Search Results for author: Yulhwa Kim

Found 5 papers, 2 papers with code

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

1 code implementation • 14 Feb 2024 • Jiwon Song, Kyungseok Oh, Taesu Kim, HyungJun Kim, Yulhwa Kim, Jae-Joon Kim

In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks.
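
The snippet names the technique but not the redundancy criterion, so the following is only a minimal sketch of the general idea, assuming each block is a callable mapping hidden states to hidden states and using input/output cosine similarity as a hypothetical redundancy proxy (the paper defines its own metric and removal procedure):

```python
import torch

def redundancy_scores(blocks, hidden_states):
    # Hypothetical proxy: a block whose output barely differs from its
    # input contributes little and is a candidate for elimination.
    scores, h = [], hidden_states
    for block in blocks:
        out = block(h)
        sim = torch.nn.functional.cosine_similarity(
            h.flatten(1), out.flatten(1), dim=1).mean().item()
        scores.append(sim)  # high similarity => more "redundant"
        h = out
    return scores

def prune_blocks(blocks, scores, num_remove):
    # Drop the blocks whose outputs are closest to their inputs.
    drop = set(sorted(range(len(blocks)), key=lambda i: -scores[i])[:num_remove])
    return [b for i, b in enumerate(blocks) if i not in drop]
```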

L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ

no code implementations • 7 Feb 2024 • Hyesung Jeon, Yulhwa Kim, Jae-Joon Kim

In resource-constrained scenarios, PTQ, with its reduced training overhead, is often preferred over QAT, despite the latter's potential for higher accuracy.

In-Context Learning • Quantization
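
Since L4Q builds on LSQ (Learned Step Size Quantization), a minimal sketch of a plain LSQ-style fake quantizer may help as a reference point: a learnable step size plus a straight-through estimator for the rounding. This is not the paper's LoRA-wise variant, and the initialization and omitted gradient scaling below are simplifying assumptions:

```python
import torch

class LSQFakeQuant(torch.nn.Module):
    """Plain LSQ-style fake quantizer (sketch; not L4Q's LoRA-wise variant)."""

    def __init__(self, num_bits=4, init_step=0.05):
        super().__init__()
        self.qn = 2 ** (num_bits - 1)       # most negative integer level
        self.qp = 2 ** (num_bits - 1) - 1   # most positive integer level
        self.step = torch.nn.Parameter(torch.tensor(init_step))  # learnable step

    def forward(self, x):
        q = torch.clamp(x / self.step, -self.qn, self.qp)
        # Straight-through estimator: round in the forward pass,
        # pass gradients through unchanged in the backward pass.
        q = (q.round() - q).detach() + q
        return q * self.step
```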

Squeezing Large-Scale Diffusion Models for Mobile

no code implementations • 3 Jul 2023 • Jiwoong Choi, Minkyu Kim, Daehyun Ahn, Taesu Kim, Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Jae-Joon Kim, HyungJun Kim

The emergence of diffusion models has greatly broadened the scope of high-fidelity image synthesis, resulting in notable advancements in both practical implementation and academic research.

Image Generation

BitSplit-Net: Multi-bit Deep Neural Network with Bitwise Activation Function

no code implementations • 23 Mar 2019 • Hyungjun Kim, Yulhwa Kim, Sungju Ryu, Jae-Joon Kim

We demonstrate that the BitSplit versions of LeNet-5, VGG-9, AlexNet, and ResNet-18 can be trained to similar classification accuracy at a lower computational cost than conventional multi-bit networks with low bit precision (≤ 4-bit).
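
As intuition for the "bitwise activation function", here is a hypothetical sketch in the spirit of BitSplit: an activation is quantized to `num_bits` bits and then split into {0, 1} bit planes that cheap binary operations can process separately. The scaling and names are assumptions, not the paper's formulation:

```python
import torch

def bitwise_activation(x, num_bits=2):
    # Assumes x is (roughly) in [0, 1]; quantize to 2**num_bits - 1 levels,
    # then split the integer code into per-bit binary feature maps.
    levels = 2 ** num_bits - 1
    q = torch.clamp(torch.round(x * levels), 0, levels).to(torch.int64)
    # Bit plane b is {0, 1}-valued; recombining the planes with weights
    # 2**b / levels recovers the quantized activation.
    return [((q >> b) & 1).float() for b in range(num_bits)]
```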

Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators

1 code implementation • 6 Nov 2018 • Yulhwa Kim, HyungJun Kim, Jae-Joon Kim

Recently, RRAM-based Binary Neural Network (BNN) hardware has been gaining interest, as it requires only a 1-bit sense-amp and eliminates the need for high-resolution ADCs and DACs.

Neural Network simulation
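
The 1-bit sense-amp claim rests on the standard XNOR-popcount identity for binarized dot products; here is a minimal pure-Python sketch (bit-packed {-1, +1} vectors; `n`, the vector length, is an assumed parameter):

```python
def bnn_dot(a_bits: int, w_bits: int, n: int) -> int:
    # Dot product of two {-1, +1} vectors of length n, each packed into an
    # int (bit i set => element i is +1). Matching positions have XNOR = 1,
    # so dot = matches - mismatches = n - 2 * popcount(a XOR w), which lets
    # BNN hardware accumulate with simple low-resolution sensing.
    mismatches = bin((a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return n - 2 * mismatches

# a = [+1, -1, +1, +1] -> 0b1101, w = [+1, +1, -1, +1] -> 0b1011
assert bnn_dot(0b1101, 0b1011, 4) == 0  # two matches, two mismatches
```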
