no code implementations • 3 Apr 2025 • JangHyun Kim, Minseong Kweon, Jinsun Park, Ukcheol Shin
Due to the limitations of RGB sensors, existing methods often struggle to achieve reliable performance in harsh environments such as heavy rain and low-light conditions.
1 code implementation • 28 Mar 2025 • Ukcheol Shin, Jinsun Park
Achieving robust and accurate spatial perception under adverse weather and lighting conditions is crucial for the high-level autonomy of self-driving vehicles and robots.
no code implementations • 24 Jan 2025 • Trong-Binh Nguyen, Minh-Duong Nguyen, Jinsun Park, Quoc-Viet Pham, Won Joo Hwang
In this paper, we introduce a novel approach, dubbed Federated Learning via On-server Matching Gradient (FedOMG), which can efficiently leverage domain information from distributed domains.
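The excerpt only names the idea, so the following is a minimal sketch of one plausible on-server gradient-matching rule: the server searches for an update direction whose worst-case agreement with the client gradients is maximized, so no client domain is ignored. The function name and the max-min objective are illustrative assumptions, not the paper's actual FedOMG formulation.

```python
import torch

def on_server_gradient_matching(client_grads, lr=0.05, steps=200):
    """Hypothetical sketch: find a server update d maximizing the worst-case
    inner product min_i <d, g_i> with the client gradients g_i.
    Not the exact FedOMG objective."""
    G = torch.stack(client_grads)                  # (num_clients, dim)
    d = G.mean(dim=0).clone().requires_grad_(True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # maximize the smallest inner product, with a norm penalty on d
        loss = -(G @ d).min() + 0.5 * d.pow(2).sum()
        loss.backward()
        opt.step()
    return d.detach()
```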
no code implementations • 6 Mar 2024 • Gyusam Chang, Wonseok Roh, Sujin Jang, Dongwook Lee, Daehyun Ji, Gyeongrok Oh, Jinsun Park, Jinkyu Kim, Sangpil Kim
Recent LiDAR-based 3D Object Detection (3DOD) methods show promising results, but they often do not generalize well to target domains outside the source (or training) data distribution.
1 code implementation • CVPR 2023 • Ukcheol Shin, Jinsun Park, In So Kweon
We conduct an exhaustive validation of monocular and stereo depth estimation algorithms designed for the visible spectrum to benchmark their performance in the thermal image domain.
1 code implementation • 14 Oct 2022 • Donggeun Yoon, Jinsun Park, Donghyeon Cho
There is strong demand for lightweight alpha matting models due to the limited computational resources of commercial portable devices.
no code implementations • 30 Apr 2022 • Daehan Kim, Minseok Seo, Jinsun Park, Dong-Geol Choi
In this paper, we introduce source domain subset sampling (SDSS) as a new perspective of semi-supervised domain adaptation.
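The excerpt does not state the sampling criterion, so the sketch below is a loudly hypothetical illustration of sampling a source subset: it keeps the source samples whose features lie closest to the target feature centroid. The function name and the distance-to-centroid rule are my assumptions, not the paper's SDSS rule.

```python
import torch

def sample_source_subset(src_feats, tgt_feats, ratio=0.5):
    """Hypothetical subset-sampling rule: keep the fraction of source samples
    whose features are nearest the target centroid. The actual SDSS
    criterion in the paper may be entirely different."""
    center = tgt_feats.mean(dim=0, keepdim=True)         # target centroid
    dist = (src_feats - center).pow(2).sum(dim=1)        # squared distances
    k = max(1, int(ratio * src_feats.size(0)))
    return torch.topk(dist, k, largest=False).indices    # k nearest source samples
```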
1 code implementation • Computer Vision and Image Understanding 2022 • Francois Rameau, Jinsun Park, Oleksandr Bailo, In So Kweon
In this paper, we present MC-Calib, a novel and robust toolbox dedicated to the calibration of complex synchronized multi-camera systems using an arbitrary number of fiducial marker-based patterns.
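MC-Calib itself is a C++ toolbox; for intuition, here is a single-camera version of the fiducial-pattern pipeline it builds on, using OpenCV's legacy ChArUco API (opencv-contrib-python before 4.7; newer releases renamed these functions). The board geometry and file names are placeholders.

```python
import cv2

# Detect ChArUco corners in each view, then calibrate (single camera only;
# MC-Calib extends this to arbitrary numbers of synchronized cameras/boards).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
board = cv2.aruco.CharucoBoard_create(5, 7, 0.04, 0.02, dictionary)

all_corners, all_ids, image_size = [], [], None
for path in ["view0.png", "view1.png"]:                  # placeholder images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if marker_ids is None:
        continue
    n, corners, ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, gray, board)
    if n is not None and n > 3:
        all_corners.append(corners)
        all_ids.append(ids)

err, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("reprojection error:", err)
```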
1 code implementation • 29 Sep 2021 • Juseong Kim, Jinsun Park, Giltae Song
The proposed SALT consists of two types of blocks: Transformer blocks and linear-layer blocks that take advantage of shared attention matrices.
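As a rough sketch of what sharing attention matrices can look like, the PyTorch toy model below computes one attention matrix and reuses it in every block, so later blocks only need value and feed-forward projections. The block layout is an assumption; the actual SALT architecture may arrange its Transformer and linear blocks differently.

```python
import torch
import torch.nn as nn

class SharedAttentionBlock(nn.Module):
    """Reuses a precomputed attention matrix instead of new Q/K projections."""
    def __init__(self, dim):
        super().__init__()
        self.v = nn.Linear(dim, dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, attn):                  # attn: (B, T, T), shared
        x = x + attn @ self.v(self.norm1(x))     # attention applied, not recomputed
        return x + self.ff(self.norm2(x))

class SharedAttentionStack(nn.Module):
    """Toy stack: the attention matrix is computed once and shared by all blocks."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.q, self.k = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.blocks = nn.ModuleList(SharedAttentionBlock(dim) for _ in range(depth))

    def forward(self, x):                        # x: (B, T, dim)
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        for blk in self.blocks:
            x = blk(x, attn)
        return x
```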
1 code implementation • ECCV 2020 • Jinsun Park, Kyungdon Joo, Zhe Hu, Chi-Kuei Liu, In So Kweon
In this paper, we propose a robust and efficient end-to-end non-local spatial propagation network for depth completion.
Ranked #1 on Depth Completion on NYU-Depth V2
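For intuition about spatial propagation, the sketch below runs a simple local (fixed 8-neighbor) propagation loop over an initial dense depth map using learned affinities. NLSPN's contribution is to make the neighborhood non-local by predicting per-pixel neighbor offsets and confidences, which this simplified version omits.

```python
import torch
import torch.nn.functional as F

def propagate(depth, affinity, iters=6):
    """Simplified *local* spatial propagation (CSPN-style) over an initial
    dense depth map. NLSPN instead predicts non-local neighbor offsets and
    confidences per pixel; a fixed 3x3 neighborhood is used here for brevity.
    depth:    (B, 1, H, W) initial depth
    affinity: (B, 8, H, W) neighbor weights, assumed to sum to <= 1 per pixel
    """
    B, _, H, W = depth.shape
    for _ in range(iters):
        nbrs = F.unfold(depth, kernel_size=3, padding=1)      # (B, 9, H*W)
        nbrs = nbrs.reshape(B, 9, H, W)
        nbrs = torch.cat([nbrs[:, :4], nbrs[:, 5:]], dim=1)   # drop the center
        center_w = 1.0 - affinity.sum(dim=1, keepdim=True)    # self weight
        depth = center_w * depth + (affinity * nbrs).sum(dim=1, keepdim=True)
    return depth
```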
no code implementations • 30 Jul 2019 • Ho-Deok Jang, Sanghyun Woo, Philipp Benz, Jinsun Park, In So Kweon
We present a simple yet effective prediction module for a one-stage detector.
1 code implementation • 11 Jul 2019 • Ukcheol Shin, Jinsun Park, Gyumin Shim, Francois Rameau, In So Kweon
In this paper, we propose a noise-aware exposure control algorithm for robust robot vision.
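The excerpt does not give the metric, so the sketch below illustrates the general idea with a toy score that rewards image gradients (information) and penalizes sensor gain (a crude noise proxy), then grid-searches exposure and gain. The metric, the `capture` interface, and the search strategy are all assumptions.

```python
import numpy as np

def noise_aware_score(img, gain, lam=0.1):
    """Toy quality metric: mean gradient magnitude (image information) minus
    a penalty growing with sensor gain (noise proxy). The paper's actual
    metric differs; this only illustrates the trade-off."""
    gy, gx = np.gradient(img.astype(np.float32))
    return float(np.sqrt(gx ** 2 + gy ** 2).mean()) - lam * gain

def pick_exposure(capture, exposures, gains):
    """Grid-search the (exposure, gain) pair maximizing the score.
    `capture(e, g)` is a hypothetical camera interface returning a frame."""
    return max(((e, g) for e in exposures for g in gains),
               key=lambda eg: noise_aware_score(capture(*eg), eg[1]))
```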
1 code implementation • Pattern Recognition Letters 2018 • Oleksandr Bailo, Francois Rameau, Kyungdon Joo, Jinsun Park, Oleksandr Bogdan, In So Kweon
Keypoint detection usually results in a large number of keypoints that are mostly clustered, redundant, and noisy.
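A common remedy, and the setting this paper speeds up, is non-maximal suppression that spreads keypoints evenly over the image. The sketch below shows a cheap grid-bucketing baseline (one best keypoint per cell); the paper's algorithms achieve a homogeneous distribution adaptively and more efficiently.

```python
import numpy as np

def grid_filter_keypoints(pts, scores, cell=32):
    """Cheap baseline for spatially homogeneous keypoints: keep only the
    highest-scoring keypoint in each grid cell. The paper proposes adaptive
    non-maximal suppression algorithms that do this job better and faster.
    pts: (N, 2) array of (x, y) positions; scores: (N,) detector responses."""
    best = {}
    for i, (x, y) in enumerate(pts):
        key = (int(y) // cell, int(x) // cell)
        if key not in best or scores[i] > scores[best[key]]:
            best[key] = i
    return np.array(sorted(best.values()))
```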
2 code implementations • ICCV 2017 • Donghyeon Cho, Jinsun Park, Tae-Hyun Oh, Yu-Wing Tai, In So Kweon
Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting.
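As a 1-D intuition for how an attention map can induce a shift map, the sketch below allocates target columns in proportion to per-column importance, so low-attention columns are squeezed away first. The column-variance attention here is a toy heuristic, whereas the paper learns the attention and the 2-D shift map end-to-end.

```python
import numpy as np

def attention_to_shift(attn_cols, target_w):
    """Map a per-column importance profile to a monotone column index map:
    target columns are allocated in proportion to importance, so
    low-attention columns are squeezed away first.
    attn_cols: (W,) non-negative importance per source column."""
    w = len(attn_cols)
    cum = np.cumsum(attn_cols / attn_cols.sum()) * target_w
    src = np.searchsorted(cum, np.arange(target_w) + 0.5)
    return np.clip(src, 0, w - 1)

def retarget_width(img, target_w):
    """img: (H, W, C). Toy attention = per-column intensity variance."""
    attn = img.std(axis=(0, 2)) + 1e-6
    return img[:, attention_to_shift(attn, target_w)]
```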
1 code implementation • CVPR 2017 • Jinsun Park, Yu-Wing Tai, Donghyeon Cho, In So Kweon
In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation.
Ranked #2 on Defocus Estimation on CUHK Blur Detection Dataset
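The sketch below computes three per-patch sharpness cues in the spirit of the paper's hand-crafted features (gradient, frequency, and singular-value statistics); the exact definitions in the paper differ. In the full method, such cues are combined with a deep CNN feature and classified to estimate defocus.

```python
import numpy as np

def handcrafted_defocus_features(patch):
    """Three per-patch sharpness cues loosely inspired by the paper's
    hand-crafted gradient, frequency, and SVD features.
    patch: (H, W) grayscale patch as float."""
    gy, gx = np.gradient(patch)
    grad_energy = float(np.mean(gx ** 2 + gy ** 2))          # gradient cue
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = f.shape
    low = f[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()    # central = low freq
    high_ratio = float((f.sum() - low) / (f.sum() + 1e-8))   # frequency cue
    s = np.linalg.svd(patch, compute_uv=False)
    sv_ratio = float(s[0] / (s.sum() + 1e-8))                # blur inflates s[0] share
    return np.array([grad_energy, high_ratio, sv_ratio])
```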
no code implementations • CVPR 2015 • Hae-Gon Jeon, Jaesik Park, Gyeongmin Choe, Jinsun Park, Yunsu Bok, Yu-Wing Tai, In So Kweon
This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera.
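A key ingredient of this method is sub-pixel shifting of sub-aperture images via the Fourier phase-shift theorem, which makes cost volumes accurate despite the lenslet camera's very narrow baseline. A minimal sketch of that shift:

```python
import numpy as np

def subpixel_shift(img, dx, dy):
    """Shift an image by a sub-pixel displacement (dx, dy) using the Fourier
    phase-shift theorem, so narrow-baseline sub-aperture images can be
    matched at sub-pixel cost-volume labels. img: (H, W) float array."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    phase = np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))
```

Matching costs are then computed between the center view and each shifted sub-aperture image over a set of sub-pixel displacement labels.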