Search Results for author: Gyeongman Kim

Found 4 papers, 1 paper with code

PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning

no code implementations · 20 Feb 2024 · Gyeongman Kim, Doohyuk Jang, Eunho Yang

Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression.

Instruction Following · Knowledge Distillation · +1

Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding

no code implementations · CVPR 2023 · Gyeongman Kim, Hajin Shim, Hyunsu Kim, Yunjey Choi, Junho Kim, Eunho Yang

Inspired by the impressive performance of recent face image editing methods, several studies have naturally extended these methods to the face video editing task.

Video Editing

Distilling Linguistic Context for Language Model Compression

1 code implementation · EMNLP 2021 · Geondo Park, Gyeongman Kim, Eunho Yang

A computationally expensive and memory-intensive neural network lies behind the recent success of language representation learning.

Knowledge Distillation · Language Modelling · +3

Contextual Knowledge Distillation for Transformer Compression

no code implementations · 1 Jan 2021 · Geondo Park, Gyeongman Kim, Eunho Yang

A computationally expensive and memory-intensive neural network lies behind the recent success of language representation learning.

Knowledge Distillation · Language Modelling · +2
