Search Results for author: Jun Won Choi

Found 31 papers, 13 papers with code

JARViS: Detecting Actions in Video Using Unified Actor-Scene Context Relation Modeling

no code implementations · 7 Aug 2024 · Seok Hwan Lee, Taein Son, Soo Won Seo, Jisong Kim, Jun Won Choi

Video action detection (VAD) is a formidable vision task that involves the localization and classification of actions within the spatial and temporal dimensions of a video clip.

Action Detection, Relation

Distribution-Aware Robust Learning from Long-Tailed Data with Noisy Labels

1 code implementation · 23 Jul 2024 · Jae Soon Baik, In Young Yoon, Kun Hoon Kim, Jun Won Choi

The performance of these methods is limited because they use only the training samples within each class for class centroid estimation, making the quality of centroids susceptible to long-tailed distributions and noisy labels.

Contrastive Learning
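
The class-centroid estimate this excerpt criticizes can be sketched as a plain per-class mean. `class_centroids` and its list-based inputs are illustrative names and simplifications, not code from the paper; the point is only to show why few samples per tail class, or mislabeled samples mixed into a class, make the centroid unreliable.

```python
# Illustrative per-class centroid estimation (hypothetical names, plain Python).
def class_centroids(features, labels):
    """Mean feature vector per class. The estimate degrades when a tail
    class has few samples, or when noisy labels place wrong-class
    samples into the mean."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        if y not in sums:
            sums[y] = [0.0] * len(f)
            counts[y] = 0
        for i, v in enumerate(f):
            sums[y][i] += v
        counts[y] += 1
    return {y: [v / counts[y] for v in sums[y]] for y in sums}
```

A single mislabeled point in a two-sample tail class shifts its centroid by half the outlier's offset, which is the fragility the abstract refers to.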

Mask2Map: Vectorized HD Map Construction Using Bird's Eye View Segmentation Masks

1 code implementation · 18 Jul 2024 · Sehwan Choi, Jungho Kim, Hongjae Shin, Jun Won Choi

PQG extracts instance-level positional queries by embedding BEV positional information into Mask-Aware Queries, while GFE utilizes BEV Segmentation Masks to generate point-level geometric features.

Autonomous Driving, Denoising (+1 more)

Semi-Supervised Domain Adaptation Using Target-Oriented Domain Augmentation for 3D Object Detection

1 code implementation · 17 Jun 2024 · Yecheol Kim, Junho Lee, Changsoo Park, Hyoung won Kim, Inho Lim, Christopher Chang, Jun Won Choi

TODA efficiently utilizes all available data, including labeled data in the source domain, and both labeled data and unlabeled data in the target domain to enhance domain adaptation performance.

3D Object Detection, Autonomous Driving (+3 more)

Fine-Grained Pillar Feature Encoding Via Spatio-Temporal Virtual Grid for 3D Object Detection

1 code implementation · 11 Mar 2024 · Konyul Park, Yecheol Kim, Junho Koh, Byungwoo Park, Jun Won Choi

Through STV grids, points within each pillar are individually encoded using Vertical PFE (V-PFE), Temporal PFE (T-PFE), and Horizontal PFE (H-PFE).

3D Object Detection, Autonomous Vehicles (+2 more)
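
The idea of viewing one pillar's points through vertical, temporal, and horizontal virtual grids can be sketched roughly as below. The binning rules, grid sizes, and the point-count "encoder" are stand-ins for the paper's learned V-PFE, T-PFE, and H-PFE modules; every name here is an assumption.

```python
# Sketch: one pillar's points (x, y, z, t) binned along three axes,
# mimicking the three virtual-grid views named in the excerpt.
def bin_points(points, key_fn):
    """Group points by a per-point integer bin key."""
    groups = {}
    for p in points:
        groups.setdefault(key_fn(p), []).append(p)
    return groups

def encode_pillar(points, z_step=0.25, t_step=1, x_step=0.1):
    vertical = bin_points(points, lambda p: int(p[2] // z_step))    # V view
    temporal = bin_points(points, lambda p: int(p[3] // t_step))    # T view
    horizontal = bin_points(points, lambda p: int(p[0] // x_step))  # H view
    pooled = lambda groups: {k: len(v) for k, v in groups.items()}  # toy encoder
    return pooled(vertical), pooled(temporal), pooled(horizontal)
```

In the actual method each view would be encoded by its own network rather than a point count; the sketch only shows how one pillar yields three complementary groupings.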

PillarGen: Enhancing Radar Point Cloud Density and Quality via Pillar-based Point Generation Network

no code implementations · 4 Mar 2024 · Jisong Kim, Geonho Bang, Kwangjin Choi, Minjae Seong, Jaechang Yoo, Eunjong Pyo, Jun Won Choi

The PillarGen model performs the following three steps: 1) pillar encoding, 2) Occupied Pillar Prediction (OPP), and 3) Pillar to Point Generation (PPG).

object-detection, Object Detection
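
The three steps listed above can be sketched as a toy pipeline. The function names, the point-count occupancy rule, and the point-spreading heuristic are hypothetical stand-ins for the learned pillar encoder, OPP, and PPG modules described in the abstract.

```python
# Sketch of the three PillarGen stages (illustrative, not the paper's code).
def pillar_encode(points, pillar_size=0.5):
    """Step 1 stand-in: group points into BEV pillars keyed by grid cell."""
    pillars = {}
    for x, y, z in points:
        key = (int(x // pillar_size), int(y // pillar_size))
        pillars.setdefault(key, []).append((x, y, z))
    return pillars

def predict_occupied(pillars, min_points=1):
    """Step 2 (OPP) stand-in: keep pillars judged occupied (here by count)."""
    return {k: v for k, v in pillars.items() if len(v) >= min_points}

def generate_points(occupied, points_per_pillar=4, pillar_size=0.5):
    """Step 3 (PPG) stand-in: emit a denser point set per occupied pillar."""
    out = []
    for (i, j), pts in occupied.items():
        cx, cy = (i + 0.5) * pillar_size, (j + 0.5) * pillar_size
        cz = sum(p[2] for p in pts) / len(pts)
        for n in range(points_per_pillar):
            out.append((cx + 0.1 * n, cy, cz))  # spread around pillar centre
    return out
```

Chaining the three stand-ins, `generate_points(predict_occupied(pillar_encode(raw)))`, mirrors the encode-predict-generate flow of the listed steps.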

RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection

no code implementations · 17 Jul 2023 · Jisong Kim, Minjae Seong, Geonho Bang, Dongsuk Kum, Jun Won Choi

While LiDAR sensors have been successfully applied to 3D object detection, the affordability of radar and camera sensors has led to a growing interest in fusing radars and cameras for 3D object detection.

3D Object Detection, Object (+1 more)

SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving

no code implementations · 12 May 2023 · Minjae Lee, Seongmin Park, Hyungmin Kim, Minyong Yoon, Janghwan Lee, Jun Won Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi

3D object detection using point cloud (PC) data is essential for perception pipelines of autonomous driving, where efficient encoding is key to meeting stringent resource and latency requirements.

3D Object Detection, Autonomous Driving (+2 more)

MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection

1 code implementation · 1 Dec 2022 · Junho Koh, Junhyung Lee, Youngwoo Lee, Jaekyum Kim, Jun Won Choi

While conventional 3D object detectors use a set of unordered LiDAR points acquired over a fixed time interval, recent studies have revealed that substantial performance improvement can be achieved by exploiting the spatio-temporal context present in a sequence of LiDAR point sets.

3D Object Detection, Object (+1 more)

R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based Trajectory Refinement

no code implementations · ICCV 2023 · Sehwan Choi, Jungho Kim, Junyong Yun, Jun Won Choi

The trajectory refinement network enhances each of the M proposals using 1) tube-query scene attention (TQSA) and 2) proposal-level interaction attention (PIA) mechanisms.

Motion Forecasting, Motion Planning (+1 more)

Boosting Monocular 3D Object Detection with Object-Centric Auxiliary Depth Supervision

no code implementations · 29 Oct 2022 · Youngseok Kim, Sanmin Kim, Sangmin Sim, Jun Won Choi, Dongsuk Kum

In this way, our 3D detection network can be supervised with additional depth supervision derived from raw LiDAR points, which incurs no human annotation cost, to estimate accurate depth without explicitly predicting a depth map.

Depth Estimation, Depth Prediction (+4 more)

Learning from Data with Noisy Labels Using Temporal Self-Ensemble

no code implementations · 21 Jul 2022 · Jun Ho Lee, Jae Soon Baik, Tae Hwan Hwang, Jun Won Choi

By combining the aforementioned metrics, we present the proposed self-ensemble-based robust training (SRT) method, which can filter the samples with noisy labels to reduce their influence on training.
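
The filtering step described above can be sketched generically: given some per-sample metric (in SRT, built from the temporal self-ensemble), samples below a threshold are set aside so their noisy labels do not drive training. The metric and threshold below are placeholders, not the paper's definitions.

```python
# Sketch of metric-based sample filtering (hypothetical metric and threshold).
def filter_noisy(samples, metric, threshold):
    """Split samples into a clean set (metric clears the threshold) and a
    likely-noisy set that is excluded from, or down-weighted in, training."""
    clean = [s for s in samples if metric(s) >= threshold]
    noisy = [s for s in samples if metric(s) < threshold]
    return clean, noisy
```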

ST-CoNAL: Consistency-Based Acquisition Criterion Using Temporal Self-Ensemble for Active Learning

no code implementations · 5 Jul 2022 · Jae Soon Baik, In Young Yoon, Jun Won Choi

The student models are given by a fixed number of temporal self-ensemble models, and the teacher model is constructed by averaging the weights of the student models.

Active Learning, Image Classification
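
The teacher construction described above, averaging the weights of K temporal student snapshots, can be sketched in plain Python. Weights are represented as named lists of floats; a real model would average per-parameter tensors instead.

```python
# Sketch: build a 'teacher' by averaging K student weight snapshots.
def average_weights(student_weights):
    """student_weights: list of dicts mapping parameter name -> list of
    floats (one dict per temporal self-ensemble snapshot)."""
    k = len(student_weights)
    teacher = {}
    for name in student_weights[0]:
        size = len(student_weights[0][name])
        teacher[name] = [sum(w[name][i] for w in student_weights) / k
                         for i in range(size)]
    return teacher
```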

Joint 3D Object Detection and Tracking Using Spatio-Temporal Representation of Camera Image and LiDAR Point Clouds

no code implementations · 14 Dec 2021 · Junho Koh, Jaekyum Kim, Jinhyuk Yoo, Yecheol Kim, Jun Won Choi

The detector constructs the spatio-temporal features via the weighted temporal aggregation of the spatial features obtained by the camera and LiDAR fusion.

3D Object Detection, Graph Neural Network (+2 more)
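
The weighted temporal aggregation can be illustrated as a per-element weighted sum over per-frame feature maps. The 1-D maps and externally supplied weights are simplifications; in the paper the spatial features come from camera-LiDAR fusion and the weighting is learned.

```python
# Sketch: weighted temporal aggregation of per-frame feature maps.
def temporal_aggregate(feature_maps, weights):
    """Combine T per-frame maps (here flat lists of floats) into one
    spatio-temporal map; weights are assumed given and sum to 1."""
    assert len(feature_maps) == len(weights)
    size = len(feature_maps[0])
    agg = [0.0] * size
    for fmap, w in zip(feature_maps, weights):
        for i in range(size):
            agg[i] += w * fmap[i]
    return agg
```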

LaPred: Lane-Aware Prediction of Multi-Modal Future Trajectories of Dynamic Agents

1 code implementation · CVPR 2021 · ByeoungDo Kim, Seong Hyeon Park, Seokhwan Lee, Elbek Khoshimjonov, Dongsuk Kum, Junsoo Kim, Jeong Soo Kim, Jun Won Choi

In this paper, we address the problem of predicting the future motion of a dynamic agent (called a target agent) given its current and past states as well as the information on its environment.

Self-Supervised Learning

Deep Learning-based Beam Tracking for Millimeter-wave Communications under Mobility

no code implementations · 19 Feb 2021 · Sun Hong Lim, Sunwoo Kim, Byonghyo Shim, Jun Won Choi

In this paper, we propose a deep learning-based beam tracking method for millimeter-wave (mmWave) communications.

Joint Representation of Temporal Image Sequences and Object Motion for Video Object Detection

1 code implementation · 20 Nov 2020 · Junho Koh, Jaekyum Kim, Younji Shin, Byeongwon Lee, Seungji Yang, Jun Won Choi

In this paper, we propose a new video object detector (VoD) method referred to as temporal feature aggregation and motion-aware VoD (TM-VoD), which produces a joint representation of temporal image sequences and object motion.

Object, object-detection (+1 more)

3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection

1 code implementation · ECCV 2020 · Jin Hyeok Yoo, Yecheol Kim, Jisong Kim, Jun Won Choi

First, the method employs auto-calibrated projection, to transform the 2D camera features to a smooth spatial feature map with the highest correspondence to the LiDAR features in the bird's eye view (BEV) domain.

3D Object Detection, object-detection

Robust Deep Multi-modal Learning Based on Gated Information Fusion Network

no code implementations · 17 Jul 2018 · Jaekyum Kim, Junho Koh, Yecheol Kim, Jaehyung Choi, Youngbae Hwang, Jun Won Choi

The goal of multi-modal learning is to use the complementary information about the relevant task provided by multiple modalities to achieve reliable and robust performance.

Data Augmentation, object-detection (+1 more)
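
The gating idea behind an information-fusion network can be illustrated with a fixed per-channel gate that blends two modality features, so a corrupted modality can be suppressed. In the actual method the gate would be predicted by a network from the inputs; the given gate here is a deliberate simplification.

```python
# Sketch: per-channel gated fusion of two modality feature vectors.
def gated_fuse(feat_a, feat_b, gate):
    """Each gate value in [0, 1] decides how much modality A contributes
    to that channel; the remainder comes from modality B."""
    return [g * a + (1 - g) * b for a, b, g in zip(feat_a, feat_b, gate)]
```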

Sequence-to-Sequence Prediction of Vehicle Trajectory via LSTM Encoder-Decoder Architecture

no code implementations · 18 Feb 2018 · Seong Hyeon Park, ByeongDo Kim, Chang Mook Kang, Chung Choo Chung, Jun Won Choi

We employ the encoder-decoder architecture, which analyzes the pattern underlying the past trajectory using the long short-term memory (LSTM)-based encoder and generates the future trajectory sequence using the LSTM-based decoder.

Decoder, Trajectory Prediction
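
The encoder-decoder flow can be sketched without an actual LSTM: a toy "encoder" folds the past trajectory into a fixed-size state plus a last-step velocity, and a toy "decoder" unrolls that state one step at a time, feeding each prediction back in. Both stand in for the LSTM cells and are assumptions, not the paper's model.

```python
# Sketch of the seq2seq structure: fold past points into a state, then unroll.
def encode(past, alpha=0.5):
    """Encoder stand-in: exponentially smooth the past (x, y) positions
    into a fixed-size state and record the last-step velocity."""
    state = list(past[0])
    for x, y in past[1:]:
        state[0] = alpha * x + (1 - alpha) * state[0]
        state[1] = alpha * y + (1 - alpha) * state[1]
    vx = past[-1][0] - past[-2][0]
    vy = past[-1][1] - past[-2][1]
    return state, (vx, vy)

def decode(state, velocity, horizon):
    """Decoder stand-in: roll the state forward, emitting one future
    position per step, with each output feeding the next step."""
    x, y = state
    future = []
    for _ in range(horizon):
        x += velocity[0]
        y += velocity[1]
        future.append((x, y))
    return future
```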

Probabilistic Vehicle Trajectory Prediction over Occupancy Grid Map via Recurrent Neural Network

no code implementations · 24 Apr 2017 · ByeoungDo Kim, Chang Mook Kang, Seung Hi Lee, Hyunmin Chae, Jaekyum Kim, Chung Choo Chung, Jun Won Choi

Our approach is data-driven and simple to use in that it learns the complex behavior of vehicles from a massive amount of trajectory data through a deep neural network model.

Model Optimization, Trajectory Prediction
