no code implementations • 14 May 2025 • Minjun Kim, Jaehyeon Choi, Jongkeun Lee, Wonjin Cho, U Kang
Network quantization has proven to be a powerful approach for reducing the memory and computational demands of deep learning models for deployment on resource-constrained devices.
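As a concrete illustration of the general idea (not this paper's specific method), the minimal sketch below applies 8-bit uniform affine quantization to a weight matrix: float32 values are replaced by integer codes plus a scale and zero point, cutting memory roughly 4x. The function names and the asymmetric scheme are assumptions made for illustration only.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, num_bits: int = 8):
    """Affine (asymmetric) uniform quantization of a weight tensor.

    Maps float weights to integers in [0, 2**num_bits - 1] and returns
    the codes plus the (scale, zero_point) needed to dequantize.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0          # avoid division by zero
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights from the integer codes."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: an 8-bit copy of a float32 weight matrix uses ~4x less memory.
w = np.random.randn(256, 256).astype(np.float32)
q, s, z = quantize_uniform(w, num_bits=8)
print("max reconstruction error:", np.abs(w - dequantize(q, s, z)).max())
```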
1 code implementation • 27 Mar 2025 • Minjun Kim, Jaehyeon Choi, SeungJoo Lee, Jinhong Jung, U Kang
In this paper, we propose AugWard (Augmentation-Aware Training with Graph Distance and Consistency Regularization), a novel graph representation learning framework that carefully considers the diversity introduced by graph augmentation.
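The abstract does not spell out the training objective, so the following is only a hypothetical sketch of what augmentation-aware training with a graph distance and consistency regularization could look like: a toy one-layer GCN encoder, an edge-drop augmentation, and a regularizer that ties the embedding gap between the two views to a simple graph distance. All names (`TinyGCN`, `drop_edges`, `augmentation_aware_loss`) and the particular distance are illustrative assumptions, not the AugWard implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """One-layer GCN encoder with mean pooling (stand-in for any GNN)."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin((a_hat / deg) @ x))              # normalized aggregation
        return h.mean(dim=0)                                 # graph-level embedding

def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Graph augmentation: randomly drop a fraction p of edges (kept symmetric)."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, 1)
    return adj * (mask + mask.t())

def augmentation_aware_loss(encoder, adj, x, alpha: float = 1.0):
    """Consistency regularizer that makes the embedding gap track a graph distance.

    Here the graph distance is simply the fraction of edges removed by the
    augmentation (a placeholder; any graph distance could be plugged in), so the
    regularizer asks the representations to reflect how much the augmentation
    changed the graph instead of forcing the two views to coincide exactly.
    """
    adj_aug = drop_edges(adj)
    graph_dist = (adj - adj_aug).abs().sum() / adj.sum().clamp(min=1.0)
    z, z_aug = encoder(adj, x), encoder(adj_aug, x)
    return alpha * ((z - z_aug).norm() - graph_dist) ** 2

# Toy usage: one random graph with 5 nodes and 8-dimensional features.
adj = (torch.rand(5, 5) > 0.5).float()
adj = torch.triu(adj, 1); adj = adj + adj.t()
x = torch.randn(5, 8)
enc = TinyGCN(8, 16)
loss = augmentation_aware_loss(enc, adj, x)
loss.backward()
```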
no code implementations • 27 Jan 2024 • Seungcheol Park, Jaehyeon Choi, Sojin Lee, U Kang
How can we compress language models without sacrificing accuracy?