1 code implementation • 2 Apr 2024 • Kento Nishi, Junsik Kim, Wanhua Li, Hanspeter Pfister
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
1 code implementation • 2 Apr 2024 • Ye Liu, Jixuan He, Wanhua Li, Junsik Kim, Donglai Wei, Hanspeter Pfister, Chang Wen Chen
Video temporal grounding (VTG) is a fine-grained video understanding problem that aims to ground relevant clips in untrimmed videos given natural language queries.
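As a hedged illustration only (not this paper's method), grounding can be reduced to scoring clips against a query embedding and keeping the clips that score high enough; `clip_feats`, `query_feat`, and the threshold below are hypothetical stand-ins:

```python
import numpy as np

def ground_clips(clip_feats, query_feat, thresh=0.5):
    """Score each clip by cosine similarity to the query embedding and
    return the indices of clips above a threshold (a naive grounding)."""
    c = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    sims = c @ q
    return np.flatnonzero(sims >= thresh), sims
```

Real VTG models predict temporal boundaries rather than thresholding per-clip scores, but the retrieval-by-similarity core is the same.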
Ranked #2 on Highlight Detection on QVHighlights
no code implementations • 8 Nov 2023 • Dawit Mureja Argaw, Junsik Kim, In So Kweon
Existing video compression (VC) methods primarily aim to reduce the spatial and temporal redundancies between consecutive frames in a video while preserving its quality.
no code implementations • 12 Oct 2023 • Sukwoong Choi, Hyo Kang, Namil Kim, Junsik Kim
We study how humans learn from AI, exploiting the introduction of an AI-powered Go program (APG) that unexpectedly outperformed the best professional player.
no code implementations • ICCV 2023 • Arda Senocak, Hyeonggon Ryu, Junsik Kim, Tae-Hyun Oh, Hanspeter Pfister, Joon Son Chung
However, prior arts and existing benchmarks do not account for a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization.
no code implementations • 18 Sep 2023 • Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi
In an active learning framework, a model queries samples to be labeled by experts and is then re-trained on the newly labeled samples.
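The generic pool-based loop described above can be sketched as follows; `train_fn`, `uncertainty_fn`, and `oracle_fn` are hypothetical stand-ins for a trainable model, an uncertainty scorer, and the expert labeler, not this paper's components:

```python
import numpy as np

def active_learning_loop(train_fn, uncertainty_fn, oracle_fn,
                         pool_x, seed_x, seed_y, rounds=3, batch=2):
    """Generic pool-based active learning: repeatedly query the most
    uncertain unlabeled samples, label them, and retrain the model."""
    labeled_x, labeled_y = list(seed_x), list(seed_y)
    pool = list(pool_x)
    model = train_fn(np.array(labeled_x), np.array(labeled_y))
    for _ in range(rounds):
        if not pool:
            break
        scores = np.array([uncertainty_fn(model, x) for x in pool])
        picks = np.argsort(scores)[::-1][:batch]  # most uncertain first
        for i in sorted(picks, reverse=True):     # pop from the back first
            x = pool.pop(i)
            labeled_x.append(x)
            labeled_y.append(oracle_fn(x))        # expert provides the label
        model = train_fn(np.array(labeled_x), np.array(labeled_y))
    return model, len(labeled_x)
```

With an uncertainty scorer that favors samples near the decision boundary, the loop spends the labeling budget on the most informative points first.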
no code implementations • 18 Sep 2023 • Minkyung Kim, Jongmin Yu, Junsik Kim, Tae-Hyun Oh, Jun Kyun Choi
Therefore, it has been a common practice to learn normality under the assumption that anomalous data are absent from the training dataset, which we call the normality assumption.
no code implementations • 21 Mar 2023 • Yongjin Jeon, Youngtack Oh, Doyoung Jeong, Hyunguk Choi, Junsik Kim
The AED-RS dataset contains satellite images of normal and abnormal situations at 8 open public places around the world.
no code implementations • 20 Feb 2023 • Moon Ye-Bin, Dongmin Choi, Yongjin Kwon, Junsik Kim, Tae-Hyun Oh
We address weakly-supervised low-shot instance segmentation, an annotation-efficient training method for dealing with novel classes effectively.
no code implementations • 13 Feb 2023 • Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi
One-class classification has been a prevailing method in building deep anomaly detection models under the assumption that a dataset consisting of normal samples is available.
no code implementations • 19 Jul 2022 • Fei Pan, Sungsu Hur, Seokju Lee, Junsik Kim, In So Kweon
Open compound domain adaptation (OCDA) considers the target domain as the compound of multiple unknown homogeneous subdomains.
no code implementations • 1 Jun 2022 • Fei Pan, Francois Rameau, Junsik Kim, In So Kweon
In this work, we propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
no code implementations • 12 Feb 2022 • Arda Senocak, Junsik Kim, Tae-Hyun Oh, Hyeonggon Ryu, Dingzeyu Li, In So Kweon
The human brain is continuously inundated with multisensory information and its complex interactions coming from the outside world at any given moment.
no code implementations • 7 Feb 2022 • Arda Senocak, Hyeonggon Ryu, Junsik Kim, In So Kweon
Thus, these semantically correlated pairs, "hard positives", are mistakenly grouped as negatives.
1 code implementation • 28 Oct 2021 • Jongmin Yu, Hyeontaek Oh, Minkyung Kim, Junsik Kim
In this paper, we propose Normality-Calibrated Autoencoder (NCAE), which can boost anomaly detection performance on the contaminated datasets without any prior information or explicit abnormal samples in the training phase.
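A minimal reconstruction-based anomaly score (a linear PCA stand-in, not NCAE itself) illustrates the underlying idea of scoring samples by their distance from a learned normal subspace:

```python
import numpy as np

def fit_linear_ae(x, k=1):
    """Fit a linear 'autoencoder' via PCA: keep the top-k principal
    directions of the (assumed normal) training data."""
    mu = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
    return mu, vt[:k]

def anomaly_score(mu, comps, x):
    """Reconstruction error: distance of each sample from the learned
    normal subspace; large values indicate likely anomalies."""
    z = (x - mu) @ comps.T   # encode
    recon = mu + z @ comps   # decode
    return np.linalg.norm(x - recon, axis=1)
```

A deep autoencoder replaces the linear projection with nonlinear encoders/decoders, but the score is still reconstruction error; the paper's contribution is calibrating this training when the data are contaminated.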
no code implementations • 29 Sep 2021 • Dongmin Choi, Moon Ye-Bin, Junsik Kim, Tae-Hyun Oh
We propose the first weakly-supervised few-shot instance segmentation task and a frustratingly simple but strong baseline model, FoxInst.
1 code implementation • 14 Sep 2021 • Jongmin Yu, Junsik Kim, Minkyung Kim, Hyeontaek Oh
However, this achievement requires large-scale and well-annotated datasets.
no code implementations • 19 Apr 2021 • Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Chaoning Zhang, In So Kweon
We formulate video restoration from a single blurred image as an inverse problem by setting the clean image sequence and its respective motion as latent factors, and the blurred image as an observation.
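The blur formation model commonly assumed in this line of work treats the observed blurred image as the temporal average of the latent sharp frames over the exposure; a minimal sketch of that forward model and the data term of the inverse problem:

```python
import numpy as np

def synthesize_blur(clean_frames):
    """Forward blur model: a blurred image approximates the temporal
    average of the latent sharp frames captured during the exposure."""
    return np.mean(np.stack(clean_frames, axis=0), axis=0)

def residual(clean_frames, blurred):
    """Data term of the inverse problem: how well a candidate latent
    sequence explains the observed blurred image."""
    return np.linalg.norm(synthesize_blur(clean_frames) - blurred)
```

Inverting this model is ill-posed (many sequences average to the same blur), which is why motion must be estimated jointly as a latent factor.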
no code implementations • 4 Mar 2021 • Dawit Mureja Argaw, Junsik Kim, Francois Rameau, In So Kweon
Abrupt motion of the camera or objects in a scene results in a blurry video; therefore, recovering a high-quality video requires two types of enhancement: visual enhancement and temporal upsampling.
no code implementations • 4 Mar 2021 • Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Jae Won Cho, In So Kweon
A flow estimator network is then used to estimate optical flow from the decoded features in a coarse-to-fine manner.
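A coarse-to-fine scheme in general estimates flow at the coarsest level, then repeatedly upsamples the estimate (rescaling its magnitudes) and refines it at each finer level; `refine_fn` below is a hypothetical stand-in for the flow estimator network, not this paper's architecture:

```python
import numpy as np

def upsample_flow(flow):
    """Double the spatial resolution and rescale flow magnitudes to
    match the finer grid (a 1-pixel motion at half resolution is a
    2-pixel motion at full resolution)."""
    up = flow.repeat(2, axis=0).repeat(2, axis=1)
    return up * 2.0

def coarse_to_fine_flow(levels, refine_fn):
    """Coarse-to-fine estimation: `levels` lists feature maps from
    coarsest to finest; `refine_fn(feat, flow)` returns a refined flow."""
    h, w = levels[0].shape[:2]
    flow = refine_fn(levels[0], np.zeros((h, w, 2)))  # coarsest estimate
    for feat in levels[1:]:
        flow = refine_fn(feat, upsample_flow(flow))   # upsample, refine
    return flow
```

Starting coarse keeps large displacements small in pixel units at each level, which is what makes the refinement tractable.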
1 code implementation • 23 Oct 2020 • Chaoning Zhang, Philipp Benz, Dawit Mureja Argaw, Seokju Lee, Junsik Kim, Francois Rameau, Jean-Charles Bazin, In So Kweon
ResNet or DenseNet?
1 code implementation • 20 Nov 2019 • Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, In So Kweon
Visual events are usually accompanied by sounds in our daily lives.
no code implementations • 16 Sep 2019 • Seokju Lee, Junsik Kim, Tae-Hyun Oh, Yongseop Jeong, Donggeun Yoo, Stephen Lin, In So Kweon
We postulate that success on this task requires the network to learn semantic and geometric knowledge in the ego-centric view.
1 code implementation • CVPR 2019 • Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon
We take an approach to learn a generalizable embedding space for novel tasks.
no code implementations • CVPR 2018 • Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, In So Kweon
We show that even with a small amount of supervision, false conclusions can be corrected and the source of sound in a visual scene can be localized effectively.
no code implementations • 5 Dec 2017 • Junsik Kim, Seokju Lee, Tae-Hyun Oh, In So Kweon
Recent advances in visual recognition show overarching success by virtue of large amounts of supervised data.
3 code implementations • ICCV 2017 • Seokju Lee, Junsik Kim, Jae Shin Yoon, Seunghak Shin, Oleksandr Bailo, Namil Kim, Tae-Hee Lee, Hyun Seok Hong, Seung-Hoon Han, In So Kweon
In this paper, we propose a unified end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions.
Ranked #1 on Lane Detection on Caltech Lanes Washington
no code implementations • ICCV 2017 • Jae Shin Yoon, Francois Rameau, Junsik Kim, Seokju Lee, Seunghak Shin, In So Kweon
We propose a novel video object segmentation algorithm based on pixel-level matching using Convolutional Neural Networks (CNN).
Ranked #73 on Semi-Supervised Video Object Segmentation on DAVIS 2016
no code implementations • CVPR 2016 • Kyungdon Joo, Tae-Hyun Oh, Junsik Kim, In So Kweon
Given a set of surface normals, we pose a Manhattan Frame (MF) estimation problem as a consensus set maximization that maximizes the number of inliers over the rotation search space.
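Counting inliers for one rotation hypothesis is straightforward under this formulation: a surface normal is an inlier if it is nearly parallel (or anti-parallel) to one of the three Manhattan Frame axes. A minimal sketch of that inlier test (the paper's contribution is the branch-and-bound search over rotations, not this counting step):

```python
import numpy as np

def mf_inliers(normals, rotation, thresh_deg=5.0):
    """Count surface normals aligned, within an angular tolerance, with
    one of the three orthogonal axes (rows of `rotation`) of a candidate
    Manhattan Frame."""
    cos_t = np.cos(np.radians(thresh_deg))
    # |n . a| close to 1 means n is parallel or anti-parallel to axis a
    align = np.abs(normals @ rotation.T)
    return int(np.sum(align.max(axis=1) >= cos_t))
```

Consensus set maximization then amounts to searching the rotation space for the hypothesis that maximizes this count.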
no code implementations • 12 May 2016 • Kyungdon Joo, Tae-Hyun Oh, Junsik Kim, In So Kweon
Most man-made environments, such as urban and indoor scenes, consist of a set of parallel and orthogonal planar structures.