no code implementations • NAACL (ACL) 2022 • Hwa-Yeon Kim, Jong-Hwan Kim, Jae-Min Kim
Autoregressive transformer (ART)-based grapheme-to-phoneme (G2P) models have been proposed for bi/multilingual text-to-speech systems.
no code implementations • 6 Sep 2023 • In-Ug Yoon, Tae-Min Choi, Sun-Kyung Lee, Young-Min Kim, Jong-Hwan Kim
To create these IOS classifiers, we encode a bias prompt into the classifiers using our specially designed module, which harnesses key-prompt pairs to pinpoint the IOS features of classes in each session.
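As an illustration only, here is a minimal sketch of how a key-prompt lookup could produce such a bias prompt; the module name, the shapes, and the softmax-weighted mixing are assumptions, not the paper's released code.

```python
# Hypothetical sketch: score a query feature against learnable keys and mix
# the paired prompts by similarity to form a per-sample "bias prompt".
import torch
import torch.nn.functional as F

class KeyPromptBias(torch.nn.Module):
    def __init__(self, n_pairs: int, dim: int):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(n_pairs, dim))     # learnable keys
        self.prompts = torch.nn.Parameter(torch.randn(n_pairs, dim))  # paired prompts

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # Scaled dot-product match of the query against every key.
        scores = query @ self.keys.t() / self.keys.shape[-1] ** 0.5
        # Similarity-weighted mixture of the paired prompts.
        return F.softmax(scores, dim=-1) @ self.prompts

module = KeyPromptBias(n_pairs=10, dim=64)
bias_prompt = module(torch.randn(4, 64))  # shape: (4, 64)
```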
1 code implementation • 4 Aug 2023 • Hwan-Soo Choi, Jongoh Jeong, Young Hoo Cho, Kuk-Jin Yoon, Jong-Hwan Kim
Sensor fusion approaches remain key to driving scene understanding in intelligent self-driving agents, given the global visual contexts acquired from input sensors.
no code implementations • 5 Jun 2023 • Hoyeon Lee, Hyun-Wook Yoon, Jong-Hwan Kim, Jae-Min Kim
We investigate the effectiveness of zero-shot and few-shot cross-lingual transfer for phrase break prediction using a pre-trained multilingual language model.
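A minimal sketch of this transfer recipe, framed as token classification on a multilingual encoder; the specific checkpoint, label set, and target-language example below are assumptions, not the paper's setup.

```python
# Hypothetical sketch: fine-tune a multilingual encoder for phrase-break
# tagging in a source language, then evaluate directly on a target language.
from transformers import AutoModelForTokenClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2  # break / no-break
)

# Fine-tune on source-language sentences labeled with phrase breaks
# (training loop omitted), then run zero-shot on a target language:
inputs = tok("Das ist ein Beispielsatz .", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1)  # one break/no-break tag per subword
```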
no code implementations • 26 May 2023 • In-Ug Yoon, Tae-Min Choi, Young-Min Kim, Jong-Hwan Kim
Few-shot class-incremental learning (FSCIL) presents the primary challenge of balancing underfitting to a new session's task and forgetting the tasks from previous sessions.
1 code implementation • 20 Feb 2023 • Tae-Min Choi, Jong-Hwan Kim
In this paper, we explore incremental few-shot object detection (iFSD), which incrementally learns novel classes using only a few examples without revisiting base classes.
1 code implementation • 21 Nov 2022 • Jongoh Jeong, Jong-Hwan Kim
Road scene understanding tasks have recently become crucial for self-driving vehicles.
no code implementations • 20 Sep 2022 • Curie Kim, Ue-Hwan Kim, Jong-Hwan Kim
There have been attempts to detect 3D objects by fusing stereo camera images with LiDAR sensor data, or by using LiDAR for pre-training and only monocular images at test time, but there have been fewer attempts to use only monocular image sequences due to their low accuracy.
no code implementations • 20 Sep 2022 • Curie Kim, Yewon Hwang, Jong-Hwan Kim
Reinforcement learning has shown outstanding performance in game applications, particularly in Atari games as well as Go.
1 code implementation • CVPR 2022 • Jin-Man Park, Ue-Hwan Kim, Seon-Hoon Lee, Jong-Hwan Kim
Moreover, we design an evaluation protocol which reflects performance in real-world settings.
1 code implementation • 19 Apr 2021 • Ue-Hwan Kim, Yewon Hwang, Sun-Kyung Lee, Jong-Hwan Kim
Our dataset consists of five sub-datasets in two languages (Korean and English) and amounts to 209,926 video instances from 122 participants.
no code implementations • 5 Apr 2021 • Dong He, Jie Cheng, Jong-Hwan Kim
This paper proposes GSECnet (Ground Segmentation network for Edge Computing), an efficient point-cloud ground segmentation framework specifically designed to be deployable on a low-power edge computing unit.
1 code implementation • 23 Mar 2021 • Ue-Hwan Kim, Jong-Hwan Kim
Self-supervised learning of depth map prediction and motion estimation from monocular video sequences is of vital importance, since it enables a broad range of tasks in robotics and autonomous vehicles.
1 code implementation • 9 Mar 2021 • Jin-Man Park, Jae-Hyuk Jang, Sahng-Min Yoo, Sun-Kyung Lee, Ue-Hwan Kim, Jong-Hwan Kim
We present a challenging dataset, ChangeSim, aimed at online scene change detection (SCD) and more.
Ranked #2 on Scene Change Detection on ChangeSim
no code implementations • 20 Oct 2020 • Tae-Min Choi, Ji-Su Kang, Jong-Hwan Kim
In RDIS, we generate extra missing values by applying a random drop on the observed values in incomplete data.
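A minimal sketch of the random-drop step described above, using NumPy; the mask convention (True = observed) and the drop rate are assumptions, not the paper's values.

```python
# Hypothetical sketch: hide a random subset of observed entries to create
# extra missing values in already-incomplete data.
import numpy as np

def random_drop(x, observed_mask, drop_rate=0.2, rng=np.random.default_rng(0)):
    drop = (rng.random(x.shape) < drop_rate) & observed_mask
    new_mask = observed_mask & ~drop           # dropped entries now count as missing
    x_dropped = np.where(new_mask, x, np.nan)  # keep only surviving observations
    return x_dropped, new_mask

x = np.arange(12, dtype=float).reshape(3, 4)
mask = np.ones_like(x, dtype=bool)             # all entries observed here
x_dropped, new_mask = random_drop(x, mask)
```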
1 code implementation • 19 Oct 2020 • Joonhyuk Kim, Sahng-Min Yoo, Gyeong-Moon Park, Jong-Hwan Kim
Our novel ETM framework contains Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting.
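A minimal sketch of the per-domain memory idea, assuming each Target-specific Memory is a small residual adapter; the adapter form and names are assumptions, not the paper's architecture.

```python
# Hypothetical sketch: one small adapter ("memory") per target domain, so
# adapting to a new domain cannot overwrite earlier domains' parameters.
import torch

class TargetMemories(torch.nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.memories = torch.nn.ModuleDict()

    def add_domain(self, name: str):
        # Each new target domain gets its own dedicated parameters.
        self.memories[name] = torch.nn.Linear(self.feat_dim, self.feat_dim)

    def forward(self, feats: torch.Tensor, domain: str) -> torch.Tensor:
        return feats + self.memories[domain](feats)  # residual correction

mem = TargetMemories(feat_dim=128)
mem.add_domain("night")
out = mem(torch.randn(2, 128), "night")
```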
1 code implementation • 23 Sep 2020 • Ue-Hwan Kim, Dongho Ka, Hwasoo Yeo, Jong-Hwan Kim
To achieve this goal, recognizing pedestrian orientation and predicting whether a pedestrian intends to cross play a central role.
1 code implementation • 14 Nov 2019 • Ue-Hwan Kim, Se-Ho Kim, Jong-Hwan Kim
Intelligent agents need to understand the surrounding environment to provide meaningful services to or interact intelligently with humans.
no code implementations • 22 Aug 2019 • Yong-Ho Yoo, Ue-Hwan Kim, Jong-Hwan Kim
In this paper, we propose a convolutional recurrent reconstructive network (CRRN), which decomposes the anomaly patterns generated by printer defects from SPI data.
1 code implementation • 14 Aug 2019 • Ue-Hwan Kim, Jin-Man Park, Taek-Jin Song, Jong-Hwan Kim
We claim the following characteristics for a versatile environment model: accuracy, applicability, usability, and scalability.
1 code implementation • 31 Jul 2019 • Ue-Hwan Kim, Sahng-Min Yoo, Jong-Hwan Kim
Current soft keyboards, however, increase the typo rate due to the lack of tactile feedback and degrade the usability of mobile devices by occupying a large portion of the screen.
1 code implementation • 31 Jul 2019 • Ue-Hwan Kim, Jong-Hwan Kim
Providing services with these two main components in a Smart Home environment requires: 1) learning and reasoning algorithms and 2) the integration of robot and IoT systems.
no code implementations • ICLR 2018 • Yong-Ho Yoo, Kook Han, Sanghyun Cho, Kyoung-Chul Koh, Jong-Hwan Kim
We propose the dense RNN, which has full connections from each hidden state directly to multiple preceding hidden states across all layers.
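A minimal sketch of such dense recurrent connections, in which every layer's new hidden state reads the previous step's hidden states from all layers; the sizes and the tanh nonlinearity are assumptions for illustration, not the paper's exact cell.

```python
# Hypothetical sketch: each layer's update sees the concatenation of all
# layers' hidden states from the previous time step.
import torch

class DenseRNNCell(torch.nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, n_layers: int):
        super().__init__()
        self.n_layers = n_layers
        self.inp = torch.nn.ModuleList(
            [torch.nn.Linear(in_dim if l == 0 else hid_dim, hid_dim)
             for l in range(n_layers)]
        )
        # Every layer receives all previous hidden states, concatenated.
        self.rec = torch.nn.ModuleList(
            [torch.nn.Linear(n_layers * hid_dim, hid_dim) for _ in range(n_layers)]
        )

    def forward(self, x, hiddens):
        h_cat = torch.cat(hiddens, dim=-1)  # all layers' states at t-1
        new_hiddens, inp = [], x
        for l in range(self.n_layers):
            h = torch.tanh(self.inp[l](inp) + self.rec[l](h_cat))
            new_hiddens.append(h)
            inp = h  # feed forward in depth as usual
        return new_hiddens

cell = DenseRNNCell(in_dim=8, hid_dim=16, n_layers=3)
h = cell(torch.randn(2, 8), [torch.zeros(2, 16)] * 3)
```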