Search Results for author: Jongwoo Lim

Found 20 papers, 8 papers with code

4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization

no code implementations 13 Nov 2024 Mijeong Kim, Jongwoo Lim, Bohyung Han

This approach improves both the performance of novel view synthesis and the quality of training image reconstruction.

Image Reconstruction, Novel View Synthesis

HeightLane: BEV Heightmap guided 3D Lane Detection

no code implementations 15 Aug 2024 Chaesong Park, Eunbin Seo, Jongwoo Lim

To address the lack of the necessary ground truth (GT) height map in the original OpenLane dataset, we leverage the Waymo dataset and accumulate its LiDAR data to generate a height map for the drivable area of each scene.

3D Lane Detection
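The GT generation step described above (accumulating Waymo LiDAR sweeps into a drivable-area height map) can be illustrated with a minimal sketch; the grid extent, resolution, and per-cell max-height rule below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def lidar_to_height_map(points, x_range=(0.0, 100.0), y_range=(-10.0, 10.0), res=0.5):
    """Rasterize accumulated LiDAR points (N, 3) into a BEV height map,
    taking the max z per cell (an assumed aggregation rule)."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    height = np.full((nx, ny), np.nan, dtype=np.float32)

    # Keep only points inside the BEV window.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    ix = ((pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int)
    for i, j, z in zip(ix, iy, pts[:, 2]):
        if np.isnan(height[i, j]) or z > height[i, j]:
            height[i, j] = z  # per-cell max height
    return height

# Toy usage: random points standing in for several accumulated sweeps.
cloud = np.random.rand(10000, 3) * np.array([100.0, 20.0, 0.5]) + np.array([0.0, -10.0, 0.0])
hm = lidar_to_height_map(cloud)
print(hm.shape)
```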

Integrating Meshes and 3D Gaussians for Indoor Scene Reconstruction with SAM Mask Guidance

no code implementations 23 Jul 2024 Jiyeop Kim, Jongwoo Lim

We use meshes for the room layout of the indoor scene, such as walls, ceilings, and floors, while employing 3D Gaussians for other objects.

Indoor Scene Reconstruction

Unbiased Estimator for Distorted Conics in Camera Calibration

1 code implementation CVPR 2024 Chaehyeon Song, Jaeho Shin, Myung-Hwan Jeon, Jongwoo Lim, Ayoung Kim

Although conics are more informative features than points, the loss of the conic property under distortion has critically limited the utility of conic features in camera calibration.

Camera Calibration

TSDF-Sampling: Efficient Sampling for Neural Surface Field using Truncated Signed Distance Field

no code implementations 29 Nov 2023 Chaerin Min, Sehyun Cha, Changhee Won, Jongwoo Lim

Notably, our method is the first that can be robustly plugged into a diverse array of neural surface field models in a plug-and-play manner, as long as they use the volume rendering technique.

Surface Reconstruction
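A minimal sketch of the idea the abstract describes, i.e. using a TSDF to concentrate volume-rendering samples near the surface along each ray; the two-stage scheme, thresholds, and the `query_tsdf` callable are assumptions, not the paper's implementation.

```python
import numpy as np

def tsdf_guided_samples(ray_o, ray_d, query_tsdf, t_near=0.1, t_far=6.0,
                        n_coarse=64, n_fine=32, trunc=0.2):
    """Pick sample depths along a ray, concentrating them where |TSDF| is small.

    query_tsdf: callable mapping (N, 3) points to truncated signed distances."""
    t_coarse = np.linspace(t_near, t_far, n_coarse)
    pts = ray_o[None, :] + t_coarse[:, None] * ray_d[None, :]
    sdf = query_tsdf(pts)

    near_surface = np.abs(sdf) < trunc
    if not near_surface.any():
        return t_coarse  # fall back to uniform sampling

    # Densify only inside the truncation band around the surface.
    lo = t_coarse[near_surface].min()
    hi = t_coarse[near_surface].max()
    t_fine = np.linspace(lo, hi, n_fine)
    return np.sort(np.concatenate([t_coarse, t_fine]))

# Toy TSDF of a plane at z = 2, clamped to the truncation distance.
plane_tsdf = lambda p: np.clip(2.0 - p[:, 2], -0.2, 0.2)
ts = tsdf_guided_samples(np.zeros(3), np.array([0.0, 0.0, 1.0]), plane_tsdf)
print(len(ts))
```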

Gramian Attention Heads are Strong yet Efficient Vision Learners

1 code implementation ICCV 2023 Jongbin Ryu, Dongyoon Han, Jongwoo Lim

We introduce a novel architecture design that enhances expressiveness by incorporating multiple head classifiers (i.e., classification heads) instead of relying on channel expansion or additional building blocks.

Fine-Grained Image Classification, Instance Segmentation, +2
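The snippet above describes replacing channel expansion with multiple lightweight classification heads; a rough sketch of that design, with second-order (Gram-matrix) features feeding each head, follows. The projection width, head count, and averaging of logits are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GramHead(nn.Module):
    """One classification head operating on a Gram (second-order) feature.
    The projection size and normalization are illustrative assumptions."""
    def __init__(self, in_ch, proj_ch, num_classes):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, proj_ch, kernel_size=1)
        self.fc = nn.Linear(proj_ch * proj_ch, num_classes)

    def forward(self, x):                                   # x: (B, C, H, W)
        f = self.proj(x)                                     # (B, P, H, W)
        B, P, H, W = f.shape
        f = f.flatten(2)                                     # (B, P, H*W)
        gram = torch.bmm(f, f.transpose(1, 2)) / (H * W)     # (B, P, P)
        return self.fc(gram.flatten(1))

class MultiHeadClassifier(nn.Module):
    """Backbone features fed to several heads; logits averaged at inference."""
    def __init__(self, in_ch=512, num_heads=4, num_classes=1000):
        super().__init__()
        self.heads = nn.ModuleList(
            [GramHead(in_ch, 64, num_classes) for _ in range(num_heads)])

    def forward(self, feat):
        logits = [h(feat) for h in self.heads]
        return torch.stack(logits).mean(0)

feat = torch.randn(2, 512, 7, 7)                 # pretend backbone output
print(MultiHeadClassifier()(feat).shape)         # torch.Size([2, 1000])
```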

X-PDNet: Accurate Joint Plane Instance Segmentation and Monocular Depth Estimation with Cross-Task Distillation and Boundary Correction

1 code implementation 15 Sep 2023 Cao Dinh Duc, Jongwoo Lim

To overcome these limitations, we propose X-PDNet, a framework for the multitask learning of plane instance segmentation and depth estimation with improvements in the following two aspects.

Instance Segmentation, Monocular Depth Estimation, +3
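The multitask setup the abstract refers to (one shared encoder, separate heads for plane instance segmentation and depth) can be sketched as below; layer sizes are placeholders, and the cross-task distillation and boundary correction modules of X-PDNet are not reproduced.

```python
import torch
import torch.nn as nn

class TwoTaskNet(nn.Module):
    """Shared encoder with one head for plane segmentation and one for depth.
    Channel counts and depths are illustrative, not the X-PDNet architecture."""
    def __init__(self, num_plane_classes=21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(64, num_plane_classes, 1)   # plane instance logits
        self.depth_head = nn.Conv2d(64, 1, 1)                 # per-pixel depth

    def forward(self, img):
        f = self.encoder(img)
        return self.seg_head(f), self.depth_head(f)

img = torch.randn(1, 3, 192, 256)
seg, depth = TwoTaskNet()(img)
print(seg.shape, depth.shape)    # (1, 21, 48, 64) (1, 1, 48, 64)
```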

Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition

no code implementations ICLR 2020 Jongbin Ryu, Gitaek Kwon, Ming-Hsuan Yang, Jongwoo Lim

When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance.

Domain Generalization, Image Classification, +1
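The snippet states the classical requirement of random forests: individually accurate yet mutually decorrelated trees. The standard levers for this, bootstrapping and per-split feature subsampling, can be demonstrated with scikit-learn; this is only the baseline mechanism, not the paper's generalized convolutional forest network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)

# Bootstrapped samples plus a small max_features decorrelate the trees,
# so the ensemble can outperform any single (stronger but correlated) tree.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                bootstrap=True, oob_score=True, random_state=0)
forest.fit(X, y)
print("OOB accuracy:", forest.oob_score_)

# Rough correlation proxy: agreement of per-tree predictions.
preds = np.stack([t.predict(X) for t in forest.estimators_])
print("agreement of first two trees:", (preds[0] == preds[1]).mean())
```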

OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems

no code implementations 18 Mar 2020 Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, Jongwoo Lim

In this paper, we present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras, which has 360-degree coverage of stereo observations of the environment.

Depth Estimation, Visual Odometry

Collaborative Training of Balanced Random Forests for Open Set Domain Adaptation

no code implementations 10 Feb 2020 Jongbin Ryu, Jiun Bae, Jongwoo Lim

In this paper, we introduce a collaborative training algorithm of balanced random forests with convolutional neural networks for domain adaptation tasks.

Domain Adaptation

OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching

2 code implementations ICCV 2019 Changhee Won, Jongbin Ryu, Jongwoo Lim

The 3D encoder-decoder block takes the aligned feature volume to produce the omnidirectional depth estimate, with regularization on uncertain regions utilizing global context information.

Decoder, Depth Estimation, +2
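A toy version of the component the abstract mentions: a 3D encoder-decoder that regularizes an aligned (spherically swept) feature volume and produces a per-direction depth estimate via a soft-argmin. Channel counts, volume shape, and layer depth are assumptions, not the OmniMVS network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CostVolumeRegularizer(nn.Module):
    """Toy 3D encoder-decoder over a (B, C, D, H, W) aligned feature volume,
    producing a depth-index estimate per spherical direction (H, W)."""
    def __init__(self, in_ch=16):
        super().__init__()
        self.enc1 = nn.Conv3d(in_ch, 32, 3, stride=2, padding=1)
        self.enc2 = nn.Conv3d(32, 64, 3, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1)

    def forward(self, vol):
        x = F.relu(self.enc1(vol))
        x = F.relu(self.enc2(x))
        x = F.relu(self.dec1(x))
        cost = self.dec2(x).squeeze(1)             # (B, D, H, W)
        prob = torch.softmax(cost, dim=1)          # distribution over depth indices
        idx = torch.arange(prob.shape[1], device=prob.device).view(1, -1, 1, 1)
        return (prob * idx).sum(1)                 # soft-argmin style depth index

vol = torch.randn(1, 16, 32, 64, 128)              # (batch, feat, depth, theta, phi)
print(CostVolumeRegularizer()(vol).shape)          # torch.Size([1, 64, 128])
```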

SweepNet: Wide-baseline Omnidirectional Depth Estimation

1 code implementation 28 Feb 2019 Changhee Won, Jongbin Ryu, Jongwoo Lim

Omnidirectional depth sensing has an advantage over conventional stereo systems since it enables us to recognize objects of interest in all directions without any blind regions.

Depth Estimation

ROVO: Robust Omnidirectional Visual Odometry for Wide-baseline Wide-FOV Camera Systems

1 code implementation 28 Feb 2019 Hochang Seok, Jongwoo Lim

For more robust and accurate ego-motion estimation, we add three components to the standard VO pipeline: 1) a hybrid projection model for improved feature matching, 2) a multi-view P3P RANSAC algorithm for pose estimation, and 3) online update of the rig extrinsic parameters.

Motion Estimation, Pose Estimation, +1
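Component 2) above, P3P inside a RANSAC loop, can be sketched with OpenCV's minimal PnP solver. The version below is single-camera for brevity, so the multi-view scoring of the paper is not reproduced, and the iteration count and inlier threshold are assumptions.

```python
import numpy as np
import cv2

def p3p_ransac(pts3d, pts2d, K, iters=200, thresh_px=2.0):
    """Estimate camera pose from 3D-2D matches with a P3P-based RANSAC loop."""
    rng = np.random.default_rng(0)
    dist = np.zeros(5)                               # assume undistorted observations
    best_inliers, best_pose = None, None
    for _ in range(iters):
        idx = rng.choice(len(pts3d), 4, replace=False)
        ok, rvec, tvec = cv2.solvePnP(pts3d[idx], pts2d[idx], K, dist,
                                      flags=cv2.SOLVEPNP_P3P)
        if not ok:
            continue
        proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
        err = np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1)
        inliers = err < thresh_px
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_pose = inliers, (rvec, tvec)
    return best_pose, best_inliers

# Toy data: random 3D points imaged by a known pose, plus pixel noise.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts3d = np.random.rand(50, 3) * 2 + np.array([0, 0, 4])
rvec_gt, tvec_gt = np.array([0.1, -0.05, 0.02]), np.array([0.2, 0.0, 0.1])
proj_gt, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, np.zeros(5))
pts2d = proj_gt.reshape(-1, 2) + np.random.randn(50, 2) * 0.5
pose, inliers = p3p_ransac(pts3d, pts2d, K)
print("inliers:", inliers.sum(), "/", len(pts3d))
```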

DFT-based Transformation Invariant Pooling Layer for Visual Classification

no code implementations ECCV 2018 Jongbin Ryu, Ming-Hsuan Yang, Jongwoo Lim

The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets.

Classification, General Classification, +1
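The invariance the title refers to comes from the fact that the magnitude of the 2D DFT is unchanged by cyclic spatial shifts; the sketch below demonstrates that property on feature maps. The number of retained low-frequency coefficients is an assumption, and this is not the paper's exact pooling layer.

```python
import torch

def dft_magnitude_pool(feat, keep=8):
    """Shift-invariant pooling: 2D FFT magnitude per channel, low frequencies kept.

    feat: (B, C, H, W). Keeping `keep` coefficients per axis is an assumption."""
    spec = torch.fft.fft2(feat)                    # complex spectrum per channel
    mag = spec.abs()                               # invariant to cyclic translation
    return mag[..., :keep, :keep].flatten(1)       # (B, C * keep * keep)

x = torch.randn(1, 4, 16, 16)
x_shift = torch.roll(x, shifts=(3, 5), dims=(2, 3))   # cyclically shifted copy
a, b = dft_magnitude_pool(x), dft_magnitude_pool(x_shift)
print(torch.allclose(a, b, atol=1e-4))             # True: pooled code is unchanged
```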

Tracking Persons-of-Interest via Unsupervised Representation Adaptation

2 code implementations 5 Oct 2017 Shun Zhang, Jia-Bin Huang, Jongwoo Lim, Yihong Gong, Jinjun Wang, Narendra Ahuja, Ming-Hsuan Yang

Multi-face tracking in unconstrained videos is a challenging problem as faces of one person often appear drastically different in multiple shots due to significant variations in scale, pose, expression, illumination, and make-up.

Clustering, Triplet

Efficient Feature Matching by Progressive Candidate Search

no code implementations 20 Jan 2017 Sehyung Lee, Jongwoo Lim, Il Hong Suh

We present a novel feature matching algorithm that systematically utilizes the geometric properties of features such as position, scale, and orientation, in addition to the conventional descriptor vectors.

Online multi-object tracking via robust collaborative model and sample selection

1 code implementation Computer Vision and Image Understanding 2017 Mohamed A. Naiel, M. Omair Ahmad, M.N.S. Swamy, Jongwoo Lim, Ming-Hsuan Yang

For each frame, we construct an association between detections and trackers, and treat each detected image region as a key sample for online update if it is associated with a tracker.

Multi-Object Tracking, Object, +3
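The per-frame association between detections and trackers mentioned above is commonly solved as a Hungarian assignment on an overlap cost; a minimal IoU-based sketch follows. The paper's collaborative appearance model and sample-selection rules are not reproduced, and the IoU gate is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracker_boxes, detection_boxes, min_iou=0.3):
    """Hungarian assignment on 1 - IoU; matches below the IoU gate are dropped."""
    cost = np.array([[1.0 - iou(t, d) for d in detection_boxes]
                     for t in tracker_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

trackers = [(10, 10, 50, 80), (100, 40, 150, 120)]
detections = [(102, 44, 149, 118), (12, 9, 52, 78), (300, 300, 340, 380)]
print(associate(trackers, detections))   # [(0, 1), (1, 0)]
```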

Hedged Deep Tracking

no code implementations CVPR 2016 Yuankai Qi, Shengping Zhang, Lei Qin, Hongxun Yao, Qingming Huang, Jongwoo Lim, Ming-Hsuan Yang

In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking.

Visual Tracking

UA-DETRAC: A New Benchmark and Protocol for Multi-Object Detection and Tracking

no code implementations 13 Nov 2015 Longyin Wen, Dawei Du, Zhaowei Cai, Zhen Lei, Ming-Ching Chang, Honggang Qi, Jongwoo Lim, Ming-Hsuan Yang, Siwei Lyu

In this work, we perform a comprehensive quantitative study on the effects of object detection accuracy on the overall MOT performance, using the new large-scale University at Albany DETection and tRACking (UA-DETRAC) benchmark dataset.

Multi-Object Tracking, Object, +2

Online Object Tracking: A Benchmark

no code implementations CVPR 2013 Yi Wu, Jongwoo Lim, Ming-Hsuan Yang

Object tracking is one of the most important components in numerous applications of computer vision.

Object, Object Tracking
