Search Results for author: Jinsun Park

Found 16 papers, 10 papers with code

All-day Depth Completion via Thermal-LiDAR Fusion

no code implementations • 3 Apr 2025 • JangHyun Kim, Minseong Kweon, Jinsun Park, Ukcheol Shin

Due to the limitations of RGB sensors, however, existing methods often struggle to achieve reliable performance in harsh environments such as heavy rain and low-light conditions.

Contrastive Learning +1

Deep Depth Estimation from Thermal Image: Dataset, Benchmark, and Challenges

1 code implementation • 28 Mar 2025 • Ukcheol Shin, Jinsun Park

Achieving robust and accurate spatial perception under adverse weather and lighting conditions is crucial for the high-level autonomy of self-driving vehicles and robots.

Stereo Depth Estimation

Federated Domain Generalization with Data-free On-server Gradient Matching

no code implementations • 24 Jan 2025 • Trong-Binh Nguyen, Minh-Duong Nguyen, Jinsun Park, Quoc-Viet Pham, Won Joo Hwang

In this paper, we introduce a novel approach, dubbed Federated Learning via On-server Matching Gradient (FedOMG), which can efficiently leverage domain information from distributed domains.

Domain Generalization Federated Learning
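The on-server gradient matching idea can be sketched as follows — a minimal illustration, not FedOMG's actual solver: the server combines per-client (per-domain) gradients into one update direction that stays aligned with every domain. Here alignment is taken as average cosine similarity, which reduces to summing unit-normalized gradients; the function name and learning rate are illustrative.

```python
import numpy as np

def on_server_gradient_matching(client_grads, lr=0.1):
    """Hypothetical sketch: fuse per-domain gradients into a single
    server-side descent step aligned with all of them. Maximizing the
    average cosine similarity amounts to summing the unit-normalized
    client gradients (a stand-in for FedOMG's matching objective)."""
    units = [g / (np.linalg.norm(g) + 1e-12) for g in client_grads]
    direction = np.sum(units, axis=0)
    return -lr * direction  # descent step applied to the global model

# two domains whose gradients point in similar directions
grads = [np.array([1.0, 0.0]), np.array([0.8, 0.6])]
step = on_server_gradient_matching(grads)
# the step descends along both domains' gradients (negative inner products)
```

Note that no client data is touched: the server only needs the uploaded gradients, which is what makes the matching "data-free".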

CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection

no code implementations • 6 Mar 2024 • Gyusam Chang, Wonseok Roh, Sujin Jang, Dongwook Lee, Daehyun Ji, Gyeongrok Oh, Jinsun Park, Jinkyu Kim, Sangpil Kim

Recent LiDAR-based 3D Object Detection (3DOD) methods show promising results, but they often do not generalize well to target domains outside the source (or training) data distribution.

3D Object Detection object-detection +1
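Domain-adversarial adaptation of the kind named in the title typically relies on a gradient-reversal layer; the sketch below shows that generic building block only, not CMDA's specific cross-modal design.

```python
import numpy as np

class GradientReversal:
    """Generic gradient-reversal layer: identity on the forward pass,
    sign-flipped (and scaled) gradient on the backward pass, so the
    feature extractor learns to fool a domain classifier. `lam` is the
    usual reversal-strength knob (illustrative, not CMDA's parameter)."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # reverse the domain-classifier gradient

grl = GradientReversal(lam=0.5)
feats = np.array([0.3, -1.2])
out = grl.forward(feats)                    # identical to feats
grad = grl.backward(np.array([1.0, 1.0]))   # scaled and sign-flipped
```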

Deep Depth Estimation From Thermal Image

1 code implementation • CVPR 2023 • Ukcheol Shin, Jinsun Park, In So Kweon

We conduct an exhaustive validation of monocular and stereo depth estimation algorithms designed for the visible spectrum to benchmark their performance in the thermal image domain.

Autonomous Driving Self-Driving Cars +1

Lightweight Alpha Matting Network Using Distillation-Based Channel Pruning

1 code implementation • 14 Oct 2022 • Donggeun Yoon, Jinsun Park, Donghyeon Cho

There is thus a demand for a lightweight alpha matting model due to the limited computational resources of commercial portable devices.

Image Matting Semantic Segmentation
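Channel pruning of this kind can be sketched as scoring each output channel and keeping only the top fraction. The L1-norm saliency below is a common stand-in; the paper instead derives channel importance from a distillation loss, so treat this as a generic illustration.

```python
import numpy as np

def prune_channels(conv_weight, keep_ratio=0.5):
    """Score each output channel of a conv weight tensor and keep the
    top-k. L1-norm saliency is used here as a proxy for the paper's
    distillation-based importance measure."""
    n_out = conv_weight.shape[0]
    scores = np.abs(conv_weight).reshape(n_out, -1).sum(axis=1)  # per-channel L1
    k = max(1, int(n_out * keep_ratio))
    keep = np.sort(np.argsort(scores)[::-1][:k])  # indices of surviving channels
    return conv_weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))      # (out_ch, in_ch, kH, kW)
pruned, kept = prune_channels(w, keep_ratio=0.5)
# half of the output channels survive; downstream layers must be re-indexed to match
```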

Source Domain Subset Sampling for Semi-Supervised Domain Adaptation in Semantic Segmentation

no code implementations • 30 Apr 2022 • Daehan Kim, Minseok Seo, Jinsun Park, Dong-Geol Choi

In this paper, we introduce source domain subset sampling (SDSS) as a new perspective of semi-supervised domain adaptation.

Domain Adaptation Semantic Segmentation +1
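As the name suggests, the method trains on a sampled subset of the source domain rather than the full set. The sketch below illustrates only that sampling step; the function name and the ratio knob are assumptions, not the paper's API.

```python
import random

def sample_source_subset(source_items, target_batch_size, ratio=1.0, seed=None):
    """Hypothetical sketch of source domain subset sampling: draw a
    random subset of the labeled source domain sized relative to the
    target batch, so source data does not dominate adaptation."""
    rng = random.Random(seed)
    k = min(len(source_items), max(1, int(target_batch_size * ratio)))
    return rng.sample(source_items, k)

subset = sample_source_subset(list(range(100)), target_batch_size=8, seed=0)
# 8 source samples drawn for one adaptation step
```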

MC-Calib: A generic and robust calibration toolbox for multi-camera systems

1 code implementation • Computer Vision and Image Understanding 2022 • Francois Rameau, Jinsun Park, Oleksandr Bailo, In So Kweon

In this paper, we present MC-Calib, a novel and robust toolbox dedicated to the calibration of complex synchronized multi-camera systems using an arbitrary number of fiducial marker-based patterns.

Camera Calibration
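Calibration toolboxes of this kind estimate camera parameters by minimizing the reprojection error of detected fiducial corners under the pinhole model, sketched here; the values and function name are illustrative, not MC-Calib's API.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: map a 3D point X into pixel coordinates via
    intrinsics K and extrinsics (R, t). Calibration fits K, R, t by
    minimizing the distance between such projections and detected
    marker corners across all cameras."""
    x_cam = R @ X + t          # world -> camera frame
    x_img = K @ x_cam          # camera frame -> homogeneous pixels
    return x_img[:2] / x_img[2]  # perspective division

# illustrative intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 2.0]))
# a point on the optical axis projects to the principal point (320, 240)
```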

SALT: Sharing Attention between Linear layer and Transformer for tabular dataset

1 code implementation • 29 Sep 2021 • Juseong Kim, Jinsun Park, Giltae Song

The proposed SALT consists of two block types: Transformer blocks and linear-layer blocks, which take advantage of shared attention matrices.

Deep Learning
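The shared-attention idea can be sketched as computing one attention matrix and reusing it in both branches. This is a minimal illustration with assumed shapes and projection names, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_attention(X, Wq, Wk):
    """Compute a single row-stochastic attention matrix over samples,
    to be reused by both branches (shapes are illustrative)."""
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(Wq.shape[1]))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))          # 5 rows of an 8-feature tabular input
A = shared_attention(X, rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))

# the SAME matrix A reweights the input for both branches
transformer_branch = A @ (X @ rng.normal(size=(8, 8)))  # attention + value projection
linear_branch = A @ X @ rng.normal(size=(8, 1))         # plain linear layer
```

Sharing `A` is the point: the cheap linear branch gets the Transformer's learned sample-to-sample weighting for free.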

Propose-and-Attend Single Shot Detector

no code implementations • 30 Jul 2019 • Ho-Deok Jang, Sanghyun Woo, Philipp Benz, Jinsun Park, In So Kweon

We present a simple yet effective prediction module for a one-stage detector.

A Unified Approach of Multi-scale Deep and Hand-crafted Features for Defocus Estimation

1 code implementation • CVPR 2017 • Jinsun Park, Yu-Wing Tai, Donghyeon Cho, In So Kweon

In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation.

Defocus Estimation Image Generation
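The hand-crafted side of such a feature combination can be sketched with simple sharpness cues. The descriptors below are illustrative, not the paper's exact features; in the full method they would be concatenated with CNN features before classification.

```python
import numpy as np

def handcrafted_defocus_features(patch):
    """Two simple sharpness cues for a grayscale patch: gradient energy
    and Laplacian variance. In-focus (sharp) patches score higher on
    both; defocused (blurry) patches score lower."""
    gy, gx = np.gradient(patch.astype(float))
    grad_energy = float(np.mean(gx**2 + gy**2))
    # discrete Laplacian via shifted copies (wrap-around borders)
    lap = (np.roll(patch, 1, 0) + np.roll(patch, -1, 0)
           + np.roll(patch, 1, 1) + np.roll(patch, -1, 1) - 4 * patch)
    return np.array([grad_energy, float(np.var(lap))])

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0   # step edge -> in focus
blurry = np.full((8, 8), 0.5)                  # flat region -> defocused
# both cues come out larger for the sharp patch than for the flat one
```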
