no code implementations • 29 Aug 2023 • Ji-Hoon Kim, Jaehun Kim, Joon Son Chung
In this paper, we propose a novel lip-to-speech system that significantly improves the generation quality by alleviating the one-to-many mapping problem from multiple perspectives.
no code implementations • 28 Feb 2023 • Ji-Hoon Kim, Hong-Sun Yang, Yoon-Cheol Ju, Il-Hwan Kim, Byeong-Yeol Kim
In this paper, we propose CrossSpeech, which improves the quality of cross-lingual speech by effectively disentangling speaker and language information at the level of the acoustic feature space.
1 code implementation • 2 Dec 2022 • Jinyoung Park, Hyeong Kyu Choi, Juyeon Ko, Hyeonjin Park, Ji-Hoon Kim, Jisu Jeong, KyungMin Kim, Hyunwoo J. Kim
To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner.
no code implementations • 12 Jul 2022 • Ji-Hoon Kim, Yeo-Reum Park, Jaeyoung Do, Soo-Young Ji, Joo-Young Kim
In this paper, we propose a computational storage platform that can accelerate a large-scale graph-based nearest neighbor search algorithm on SmartSSD computational storage devices (CSDs).
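For context, here is a minimal sketch of the greedy best-first search that graph-based nearest neighbor algorithms of this kind perform; the toy graph, `greedy_search`, and its parameters are illustrative assumptions, not the platform's implementation.

```python
# Minimal sketch of greedy best-first search over a proximity graph, the class
# of algorithm such a platform accelerates. The toy graph and names are
# illustrative, not the platform's implementation.
import heapq
import numpy as np

def greedy_search(vectors, neighbors, query, entry, ef=8):
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    frontier = [(dist(entry), entry)]         # min-heap of nodes to expand
    best = [(-dist(entry), entry)]            # max-heap of current top-ef results
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > -best[0][0]:                   # frontier is worse than worst result
            break
        for nb in neighbors[node]:
            if nb not in visited:
                visited.add(nb)
                d_nb = dist(nb)
                if len(best) < ef or d_nb < -best[0][0]:
                    heapq.heappush(frontier, (d_nb, nb))
                    heapq.heappush(best, (-d_nb, nb))
                    if len(best) > ef:
                        heapq.heappop(best)   # drop the current worst
    return sorted((-d, i) for d, i in best)   # (distance, index), ascending

rng = np.random.default_rng(0)
vecs = rng.normal(size=(100, 16))
# Toy graph: connect each point to its 5 exact nearest neighbors.
nbrs = {i: np.argsort(np.linalg.norm(vecs - vecs[i], axis=1))[1:6].tolist()
        for i in range(len(vecs))}
print(greedy_search(vecs, nbrs, rng.normal(size=16), entry=0))
```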
1 code implementation • Findings (ACL) 2022 • Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models.
1 code implementation • CVPR 2022 • Jisoo Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon
To take such non-linear characteristics into account, we introduce Label-Gradient Alignment (LGA), a novel NTK-based metric whose formulation allows it to capture the substantial non-linear advantage present in modern neural architectures.
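As a rough illustration of the idea, a minimal sketch of an NTK-style alignment score between the empirical NTK Gram matrix and a label-similarity matrix; the paper's exact LGA formulation differs, and all names below are assumptions rather than the authors' code.

```python
# Minimal sketch of an NTK-style label-gradient alignment score: how well the
# empirical NTK Gram matrix aligns with a label-similarity matrix. The exact
# LGA formulation in the paper differs; names here are illustrative.
import torch

def label_gradient_alignment(model, x, y):
    grads = []
    for xi in x:                                # per-sample parameter gradients
        out = model(xi.unsqueeze(0)).sum()
        g = torch.autograd.grad(out, list(model.parameters()))
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    jac = torch.stack(grads)                    # (N, P) Jacobian
    ntk = jac @ jac.t()                         # empirical NTK Gram matrix
    y = y.float().view(-1, 1)
    label_sim = y @ y.t()                       # label similarity (labels in {-1, +1})
    ntk_c = ntk - ntk.mean()                    # centered cosine similarity
    lab_c = label_sim - label_sim.mean()
    return (ntk_c * lab_c).sum() / (ntk_c.norm() * lab_c.norm())

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x, y = torch.randn(10, 8), torch.sign(torch.randn(10))
print(label_gradient_alignment(model, x, y).item())
```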
no code implementations • NeurIPS 2021 • Sang-Hoon Lee, Ji-Hoon Kim, Hyunseung Chung, Seong-Whan Lee
This insufficiency leads to converted speech that retains the style of the source speech or loses its content.
no code implementations • ICLR 2022 • Hyeonmin Ha, Ji-Hoon Kim, Semin Park, Byung-Gon Chun
We propose Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS), a supernet learning strategy based on meta-learning to tackle the knowledge forgetting issue.
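As background, a minimal sketch of a Reptile-style meta-update, one strategy in the meta-learning family SUMNAS builds on; each "task" below stands in for the loss of one sampled subnetwork, and the code is an illustrative assumption, not the authors' training procedure.

```python
# Minimal sketch of a Reptile-style meta-update: adapt from shared weights on
# each task, then move the shared weights toward the average adapted weights.
# Here each "task" stands in for one sampled subnetwork's loss; illustrative only.
import copy
import torch

def reptile_step(supernet, task_losses, inner_steps=3, inner_lr=1e-2, meta_lr=0.5):
    init = copy.deepcopy(supernet.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in init.items()}
    for loss_fn in task_losses:
        supernet.load_state_dict(init)            # restart from the shared weights
        opt = torch.optim.SGD(supernet.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # inner-loop adaptation
            opt.zero_grad()
            loss_fn(supernet).backward()
            opt.step()
        with torch.no_grad():                     # accumulate this task's weight shift
            for k, v in supernet.state_dict().items():
                deltas[k] += (v - init[k]) / len(task_losses)
    supernet.load_state_dict({k: init[k] + meta_lr * deltas[k] for k in init})

net = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
reptile_step(net, [lambda m: torch.nn.functional.mse_loss(m(x), y)] * 2)
```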
no code implementations • 16 Aug 2021 • Ji-Hoon Kim, Sang-Hoon Lee, Ji-Hyun Lee, Hong-Gyu Jung, Seong-Whan Lee
While numerous attempts have been made at few-shot speaker adaptation, a gap in speaker similarity to the target speaker remains, depending on the amount of adaptation data.
1 code implementation • Findings (ACL) 2021 • Yeon Seonwoo, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question.
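To make the loop concrete, a minimal sketch of iterative retrieval with bridge-entity query reformulation; `retrieve` and `extract_entity` are hypothetical stand-ins for a real retriever and reader, not the paper's components.

```python
# Minimal sketch of iterative retrieval for multi-hop QA: each hop retrieves
# documents, extracts the missing (bridge) entity, and reformulates the query.
# retrieve() and extract_entity() are hypothetical stand-ins, not the paper's API.
from typing import Callable, List

def iterative_retrieval(question: str,
                        retrieve: Callable[[str], List[str]],
                        extract_entity: Callable[[str, List[str]], str],
                        hops: int = 2) -> List[str]:
    query, evidence = question, []
    for _ in range(hops):
        docs = retrieve(query)                   # hop-t document retrieval
        evidence.extend(docs)
        bridge = extract_entity(question, docs)  # missing entity found this hop
        query = f"{question} {bridge}"           # reformulated query for next hop
    return evidence

# Toy usage with stand-in components:
print(iterative_retrieval(
    "Who directed the film that won Best Picture in 1995?",
    retrieve=lambda q: [f"doc retrieved for: {q[:48]}"],
    extract_entity=lambda q, d: "Forrest Gump",
))
```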
2 code implementations • 4 Jun 2021 • Ji-Hoon Kim, Sang-Hoon Lee, Ji-Hyun Lee, Seong-Whan Lee
Although recent works on neural vocoders have improved the quality of synthesized audio, a gap still exists between generated and ground-truth audio in the frequency domain.
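One common way to quantify and penalize such a gap is a multi-resolution STFT loss; a minimal sketch follows (a generic spectral loss for illustration, not this paper's specific method).

```python
# Minimal sketch of a multi-resolution STFT loss, one common way to penalize the
# frequency-domain gap between generated and ground-truth audio. This is a
# generic spectral loss for illustration, not this paper's specific components.
import torch

def stft_mag(x, n_fft, hop):
    window = torch.hann_window(n_fft)
    return torch.stft(x, n_fft, hop_length=hop, window=window,
                      return_complex=True).abs().clamp(min=1e-7)

def multi_resolution_stft_loss(fake, real,
                               resolutions=((512, 128), (1024, 256), (2048, 512))):
    loss = 0.0
    for n_fft, hop in resolutions:
        f, r = stft_mag(fake, n_fft, hop), stft_mag(real, n_fft, hop)
        sc = torch.norm(r - f) / torch.norm(r)                # spectral convergence
        mag = torch.nn.functional.l1_loss(f.log(), r.log())   # log-magnitude L1
        loss = loss + sc + mag
    return loss / len(resolutions)

fake, real = torch.randn(1, 22050), torch.randn(1, 22050)
print(multi_resolution_stft_loss(fake, real).item())
```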
no code implementations • 1 Jan 2021 • Jonghyun Bae, Ji-Hoon Kim
Data augmentation tuned to datasets and tasks has had great success in various AI applications, such as computer vision, natural language processing, autonomous driving, and bioinformatics.
1 code implementation • COLING 2020 • Sungrae Park, Geewook Kim, Junyeop Lee, Junbum Cha, Ji-Hoon Kim, Hwalsuk Lee
This paper introduces a method that efficiently reduces the computational cost and parameter size of the Transformer.
1 code implementation • EMNLP 2020 • Yeon Seonwoo, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
With experiments on reading comprehension, we show that BLANC outperforms the state-of-the-art QA models, and the performance gap increases as the number of answer text occurrences increases.
no code implementations • 4 Sep 2020 • Heungseok Park, Yoonsoo Nam, Ji-Hoon Kim, Jaegul Choo
HyperTendril takes a novel approach to effectively steering hyperparameter optimization through an iterative, interactive tuning procedure that allows users to refine the search spaces and the configuration of the AutoML method based on their own insights from the given results.
no code implementations • CVPR 2021 • Jongwon Choi, Kwang Moo Yi, Ji-Hoon Kim, Jinho Choo, Byoungjip Kim, Jin-Yeop Chang, Youngjune Gwon, Hyung Jin Chang
We show that our method can be applied to classification tasks on multiple different datasets -- including one that is a real-world dataset with heavy data imbalance -- significantly outperforming the state of the art.
no code implementations • ICLR 2019 • Jisung Hwang, Younghoon Kim, Sanghyuk Chun, Jaejun Yoo, Ji-Hoon Kim, Dongyoon Han, Jung-Woo Ha
The checkerboard phenomenon is a well-known visual artifact in computer vision.
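A minimal sketch of one well-known source of this artifact, the uneven kernel overlap of a strided transposed convolution (illustrative background; not this paper's experiments):

```python
# Minimal sketch of where checkerboard artifacts come from: a stride-2, kernel-3
# transposed convolution overlaps its kernel unevenly, so even a constant input
# yields a periodic output pattern.
import torch

deconv = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, bias=False)
torch.nn.init.constant_(deconv.weight, 1.0)   # uniform weights isolate the overlap
out = deconv(torch.ones(1, 1, 8, 8))
print(out[0, 0, :4, :4].detach())             # alternating values: the checkerboard
```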
no code implementations • 8 Oct 2018 • Jinwoong Kim, Minkyu Kim, Heungseok Park, Ernar Kusdavletov, Dongjun Lee, Adrian Kim, Ji-Hoon Kim, Jung-Woo Ha, Nako Sung
Many hyperparameter optimization (HyperOpt) methods assume restricted computing resources and mainly focus on enhancing performance.
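For background, a minimal sketch of successive halving, a standard budget-aware HyperOpt baseline of the kind such systems schedule; the function and its toy scorer are illustrative assumptions, not this paper's method.

```python
# Minimal sketch of successive halving: evaluate many configs on a small budget,
# keep the top fraction, and double the budget each round. Illustrative only.
import random

def successive_halving(configs, train_eval, budget=1, rounds=3, keep=0.5):
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: train_eval(c, budget), reverse=True)
        survivors = scored[:max(1, int(len(scored) * keep))]  # keep the top fraction
        budget *= 2                                           # survivors get more budget
    return survivors[0]

random.seed(0)
configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(8)]
toy_eval = lambda cfg, b: -abs(cfg["lr"] - 0.01) + 0.001 * b  # hypothetical score
print(successive_halving(configs, toy_eval))
```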
no code implementations • 11 Aug 2018 • Tsung-Ting Kuo, Jina Huh, Ji-Hoon Kim, Robert El-Kareh, Siddharth Singh, Stephanie Feudjio Feupe, Vincent Kuri, Gordon Lin, Michele E. Day, Lucila Ohno-Machado, Chun-Nan Hsu
Our study introduces CLEAN (CLinical note rEview and ANnotation), a pre-annotation-based cNLP annotation system that improves clinical note annotation of data elements, and comprehensively compares CLEAN with the widely used Brat Rapid Annotation Tool (BRAT).