no code implementations • 18 Apr 2024 • Thibault Castells, Hyoung-Kyu Song, Tairen Piao, Shinkook Choi, Bo-Kyeong Kim, Hanyoung Yim, Changgwun Lee, Jae Gon Kim, Tae-Ho Kim
The intensive computational burden of Stable Diffusion (SD) for text-to-image generation poses a significant hurdle for its practical application.
no code implementations • 8 Mar 2024 • Joseph Cho, Fachrina Dewi Puspitasari, Sheng Zheng, Jingyao Zheng, Lik-Hang Lee, Tae-Ho Kim, Choong Seon Hong, Chaoning Zhang
Text-to-video generation marks a significant frontier in the rapidly evolving domain of generative AI, integrating advancements in text-to-image synthesis, video captioning, and text-guided editing.
no code implementations • 5 Feb 2024 • Bo-Kyeong Kim, Geonmin Kim, Tae-Ho Kim, Thibault Castells, Shinkook Choi, Junho Shin, Hyoung-Kyu Song
Structured pruning of modern large language models (LLMs) has emerged as a way of decreasing their high computational needs.
1 code implementation • 15 Dec 2023 • Chaoning Zhang, Dongshen Han, Sheng Zheng, Jinwoo Choi, Tae-Ho Kim, Choong Seon Hong
The efficiency bottleneck of SegEvery with SAM, however, lies in its mask decoder because it needs to first generate numerous masks with redundant grid-search prompts and then perform filtering to obtain the final valid masks.
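The SegEvery pipeline described above can be sketched in a few lines: densely sample grid-point prompts, score a mask per prompt, then filter the redundant results. This is a minimal illustrative sketch, not the paper's or SAM's actual implementation; the function names, thresholds, and the IoU-based deduplication are assumptions for illustration.

```python
import numpy as np

def grid_prompts(h, w, n=8):
    """Generate an n x n grid of point prompts over an h x w image
    (the redundant grid-search prompts mentioned above)."""
    ys = np.linspace(0, h - 1, n)
    xs = np.linspace(0, w - 1, n)
    return [(int(y), int(x)) for y in ys for x in xs]

def filter_masks(masks, scores, iou_thresh=0.7, score_thresh=0.5):
    """Keep high-scoring binary masks, dropping near-duplicates by mask IoU.
    Returns the indices of the masks that survive filtering."""
    order = np.argsort(scores)[::-1]  # best-scoring first
    kept = []
    for i in order:
        if scores[i] < score_thresh:
            continue
        duplicate = False
        for j in kept:
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            if union and inter / union > iou_thresh:
                duplicate = True
                break
        if not duplicate:
            kept.append(i)
    return kept
```

Because every grid prompt runs through the mask decoder before filtering, the decoder's cost grows with the prompt count, which is the bottleneck the paper targets.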
no code implementations • 21 Dec 2022 • Seongmin Park, Beomseok Kwon, Jieun Lim, Kyuyoung Sim, Tae-Ho Kim, Jungwook Choi
Uniform-precision neural network quantization has gained popularity because it simplifies the densely packed arithmetic units required for high computing capability.
no code implementations • 1 Jan 2021 • Seongmin Park, Beomseok Kwon, Kyuyoung Sim, Jieun Lim, Tae-Ho Kim, Jungwook Choi
Uniform-precision neural network quantization has gained popularity thanks to its simple arithmetic units, densely packed for high computing capability.
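Uniform-precision quantization, as referenced in the two entries above, maps every tensor value onto one shared integer grid so that all layers can use the same fixed-width arithmetic units. The following is a minimal symmetric-quantization sketch under common assumptions, not the method proposed in these papers.

```python
import numpy as np

def uniform_quantize(x, num_bits=8):
    """Symmetric uniform quantization: a single scale for the whole tensor,
    so every value shares the same integer grid (num_bits <= 8 here)."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the integer codes."""
    return q.astype(np.float32) * scale
```

Because one scale serves the whole tensor, the hardware only needs uniform fixed-point multipliers; the cost is that the representable range is set by the largest value, which is the accuracy issue mixed-precision schemes try to address.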
1 code implementation • 11 Nov 2019 • Tae-Ho Kim, Sungjae Cho, Shinkook Choi, Sejik Park, Soo-Young Lee
The embedding space of seq2seq-based TTS has abundant information on the text.
no code implementations • 3 Sep 2019 • Hyoung-Kyu Song, Ebrahim AlAlkeem, Jaewoong Yun, Tae-Ho Kim, Hyerin Yoo, Dasom Heo, Chan Yeob Yeun, Myungsu Chae
Most research has focused on a single modality or a single task, while combinations of input modalities or tasks have yet to be investigated.
no code implementations • 12 Feb 2019 • Dae-Woong Jeong, Jaehun Kim, Young-Seok Kim, Tae-Ho Kim, Myungsu Chae
Existing high-performance deep learning models demand intensive computation.
1 code implementation • 4 Sep 2018 • Myungsu Chae, Tae-Ho Kim, Young Hoon Shin, June-Woo Kim, Soo-Young Lee
In our experiments, emotion and gender recognition with the proposed method yielded a lower joint loss (computed as the negative log-likelihood) than using static weights for the joint loss.
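The contrast above between dynamic and static weighting of a joint loss can be illustrated with a common uncertainty-based scheme (learned per-task precisions), shown here only as one representative dynamic-weighting approach; the paper's exact formulation may differ, and the function names are assumptions.

```python
import numpy as np

def joint_nll(losses, log_vars):
    """Dynamically weighted joint loss: each task loss is scaled by a
    learned precision exp(-log_var), with log_var as a regularizer
    (uncertainty-style weighting)."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

def static_joint(losses, weights):
    """Baseline: joint loss with fixed static weights."""
    return float(np.dot(np.asarray(weights, dtype=float),
                        np.asarray(losses, dtype=float)))
```

With log_vars treated as trainable parameters, the optimizer can down-weight a noisier task during training, whereas static weights keep the task balance fixed for the whole run.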