Search Results for author: Joonyoung Kim

Found 5 papers, 1 paper with code

Attention-aware Post-training Quantization without Backpropagation

no code implementations • 19 Jun 2024 • Junhan Kim, Ho-young Kim, Eulrang Cho, Chungman Lee, Joonyoung Kim, Yongkweon Jeon

Quantization is a promising solution for deploying large-scale language models (LLMs) on resource-constrained devices.

Quantization
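For context, the sketch below shows what generic uniform post-training weight quantization looks like; it is a standard baseline for illustration only and is not the attention-aware, backpropagation-free method this paper proposes. The 4-bit width and per-tensor scaling are illustrative assumptions.

```python
# A minimal sketch of symmetric uniform post-training weight quantization.
# Illustrates the general idea only; NOT the paper's attention-aware method.
import numpy as np

def quantize_weights(w: np.ndarray, n_bits: int = 4):
    """Quantize a weight tensor to n_bits integers with a per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax         # map the largest |weight| to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from integers and scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_weights(w, n_bits=4)
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```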

Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers

no code implementations • 14 Feb 2024 • Junhan Kim, Kyungphil Park, Chungman Lee, Ho-young Kim, Joonyoung Kim, Yongkweon Jeon

Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models.

Quantization

Neural Sequence-to-grid Module for Learning Symbolic Rules

1 code implementation • 13 Jan 2021 • Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung

Logical reasoning tasks over symbols, such as learning arithmetic operations and evaluating computer programs, remain challenging for deep learning.

Logical Reasoning
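As a toy illustration of the sequence-to-grid intuition, the sketch below lays out a flat digit sequence as a 2-D grid so that a spatially local, column-wise operation can perform addition. The alignment here is hand-coded; the paper's module learns the sequence-to-grid mapping, and all names in this sketch are illustrative assumptions.

```python
# Toy analogy for the sequence-to-grid idea: reshape flat symbol sequences
# into a grid so local operations exploit structure (digit alignment).
# Hand-written illustration; NOT the learned module from the paper.
def to_grid(a: str, b: str):
    """Right-align two digit strings in a 2-row grid, padding with zeros."""
    width = max(len(a), len(b))
    return [list(a.rjust(width, "0")), list(b.rjust(width, "0"))]

def columnwise_add(grid):
    """Add the two rows column by column, right to left, with carry."""
    carry, digits = 0, []
    for top, bottom in zip(reversed(grid[0]), reversed(grid[1])):
        s = int(top) + int(bottom) + carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

grid = to_grid("478", "95")
print(columnwise_add(grid))  # prints 573
```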
