Search Results for author: Euijai Ahn

Found 2 papers, 2 papers with code

NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models

1 code implementation • 16 Oct 2023 • Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun

Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers.
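For orientation only: the paper's NASH framework is not reproduced here, but the sketch below shows what "structured pruning" typically means for a Transformer, removing whole attention heads rather than individual weights. The scoring rule (per-head weight magnitude), function name, and tensor shapes are illustrative assumptions, not the method proposed in the paper.

```python
import torch

def prune_heads_by_norm(attn_proj: torch.Tensor, num_heads: int, keep: int):
    """Illustrative structured-pruning criterion: score each attention head by the
    L2 norm of its slice of the output projection and keep the top-`keep` heads."""
    head_dim = attn_proj.shape[0] // num_heads
    # Split the projection into per-head blocks and compute one score per head.
    scores = attn_proj.view(num_heads, head_dim, -1).pow(2).sum(dim=(1, 2)).sqrt()
    return torch.topk(scores, keep).indices.sort().values

# Toy usage: keep the 8 highest-scoring of 12 heads in a random projection matrix.
proj = torch.randn(12 * 64, 768)
print(prune_heads_by_norm(proj, num_heads=12, keep=8))
```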

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective

1 code implementation • 3 Feb 2023 • Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun

Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs).

Knowledge Distillation
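As background for the task tag above, here is a minimal sketch of standard logit-level knowledge distillation (a soft KL term against the teacher blended with the hard-label loss). It is generic KD, not the intermediate layer distillation studied in the paper; the function name, temperature, and mixing weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a temperature-scaled KL term against the teacher's soft targets
    with the usual cross-entropy on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a batch of 4 examples over a 10-class output.
s, t = torch.randn(4, 10), torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(distillation_loss(s, t, y).item())
```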
