no code implementations • 18 Apr 2024 • Thibault Castells, Hyoung-Kyu Song, Bo-Kyeong Kim, Shinkook Choi
Latent Diffusion Models (LDMs) have emerged as powerful generative models, known for delivering remarkable results under constrained computational resources.
no code implementations • 18 Apr 2024 • Thibault Castells, Hyoung-Kyu Song, Tairen Piao, Shinkook Choi, Bo-Kyeong Kim, Hanyoung Yim, Changgwun Lee, Jae Gon Kim, Tae-Ho Kim
The intensive computational burden of Stable Diffusion (SD) for text-to-image generation poses a significant hurdle for its practical application.
1 code implementation • 18 Apr 2024 • KyungHwan Shim, Jaewoong Yun, Shinkook Choi
Conventional pruning approaches can compress and accelerate the MSA module only through head pruning, even though the head is not an atomic unit.
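A minimal, framework-free sketch of the contrast described above (illustrative only, not the paper's method): conventional head pruning drops entire attention heads, whereas a finer-grained scheme can prune weights inside each head. The scoring function and data here are hypothetical.

```python
# Contrast whole-head pruning of multi-head self-attention (MSA) with
# finer, sub-head pruning. Heads are modeled as flat weight lists.

def head_norms(heads):
    """Hypothetical importance score per head: sum of squared weights."""
    return [sum(w * w for w in h) for h in heads]

def prune_heads(heads, keep):
    """Conventional head pruning: drop the lowest-scoring heads entirely."""
    norms = head_norms(heads)
    order = sorted(range(len(heads)), key=lambda i: norms[i], reverse=True)
    kept = sorted(order[:keep])
    return [heads[i] for i in kept]

def prune_within_heads(heads, keep_per_head):
    """Finer granularity: the head is not atomic, so trim inside each head."""
    return [sorted(h, key=abs, reverse=True)[:keep_per_head] for h in heads]

msa = [[0.9, 0.8, 0.1], [0.05, 0.02, 0.01], [0.7, 0.6, 0.2]]
print(prune_heads(msa, keep=2))               # removes the weakest head
print(prune_within_heads(msa, keep_per_head=2))  # keeps all heads, trims each
```

Sub-head pruning preserves the number of heads (and hence the attention pattern diversity) while still shrinking the parameter count.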
1 code implementation • 5 Feb 2024 • Bo-Kyeong Kim, Geonmin Kim, Tae-Ho Kim, Thibault Castells, Shinkook Choi, Junho Shin, Hyoung-Kyu Song
Structured pruning of modern large language models (LLMs) has emerged as a way of decreasing their high computational needs.
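One common form of structured pruning for LLMs is depth pruning: removing entire transformer blocks, which reduces compute without requiring sparse kernels. A hedged, toy sketch (the importance metric and scores below are hypothetical, not the paper's):

```python
# Toy depth pruning: rank transformer blocks by an importance score and
# drop the least important ones wholesale.

def depth_prune(block_scores, num_remove):
    """Return indices of blocks to keep after dropping `num_remove` blocks.
    block_scores: one importance value per block (e.g., how much the model
    output changes when that block is skipped -- a hypothetical metric)."""
    drop = set(sorted(range(len(block_scores)),
                      key=lambda i: block_scores[i])[:num_remove])
    return [i for i in range(len(block_scores)) if i not in drop]

scores = [0.9, 0.1, 0.6, 0.05, 0.8]        # toy importances for 5 blocks
print(depth_prune(scores, num_remove=2))   # → [0, 2, 4]
```

Because whole blocks disappear, the pruned model keeps dense matrix shapes and runs faster on standard hardware out of the box.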
3 code implementations • 25 May 2023 • Bo-Kyeong Kim, Hyoung-Kyu Song, Thibault Castells, Shinkook Choi
Text-to-image (T2I) generation with Stable Diffusion models (SDMs) involves high computing demands due to billion-scale parameters.
Tasks: DreamBooth, Personalized Generation, Image-to-Image Translation
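Compressing a Stable Diffusion model typically pairs architecture reduction with knowledge distillation, training the compact student to mimic the original teacher at both the output and intermediate-feature level. A framework-free sketch of such a combined loss (all weights and values here are illustrative, not the paper's exact recipe):

```python
# Toy distillation objective: task loss + output-level KD + feature-level KD.

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distill_loss(student_out, teacher_out, student_feat, teacher_feat,
                 task_loss, w_out=1.0, w_feat=1.0):
    """Total loss: the student matches the teacher's outputs and features
    while still optimizing its own task loss."""
    return (task_loss
            + w_out * mse(student_out, teacher_out)
            + w_feat * mse(student_feat, teacher_feat))

loss = distill_loss([0.2, 0.4], [0.25, 0.35],   # denoising outputs
                    [1.0, 0.0], [0.9, 0.1],     # intermediate features
                    task_loss=0.5)
print(loss)
```

The feature-level term is what lets a student with removed blocks still inherit the teacher's internal representations.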
no code implementations • 8 Apr 2023 • Shinkook Choi, Junkyeong Choi
As deep learning advances, edge devices and lightweight neural networks are becoming more important.
no code implementations • 2 Apr 2023 • Bo-Kyeong Kim, Jaemin Kang, Daeun Seo, Hancheol Park, Shinkook Choi, Hyoung-Kyu Song, Hyungshin Kim, Sungsu Lim
Virtual humans have gained considerable attention in numerous industries, e.g., entertainment and e-commerce.
no code implementations • 29 Jun 2022 • Bo-Kyeong Kim, Shinkook Choi, Hancheol Park
Pruning effectively compresses overparameterized models.
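The generic idea can be shown with magnitude pruning, the textbook baseline (not necessarily this paper's criterion): weights with the smallest absolute value are zeroed, on the assumption that they contribute least to the output.

```python
# Minimal magnitude pruning: zero out the fraction `sparsity` of weights
# with the smallest |w|.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

print(magnitude_prune([0.5, -0.01, 0.3, 0.02, -0.7], sparsity=0.4))
# → [0.5, 0.0, 0.3, 0.0, -0.7]
```

In practice the pruned model is then fine-tuned briefly to recover any accuracy lost to the removed weights.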
1 code implementation • 11 Nov 2019 • Tae-Ho Kim, Sungjae Cho, Shinkook Choi, Sejik Park, Soo-Young Lee
The embedding space of seq2seq-based TTS contains abundant information about the text.