Search Results for author: Youngseok Kim

Found 9 papers, 5 papers with code

Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation

1 code implementation · 3 Mar 2024 · Heegon Jin, Seonil Son, Jemin Park, Youngseok Kim, Hyungjong Noh, Yeonsoo Lee

The Attention Alignment Module in A2D performs a dense head-by-head comparison between student and teacher attention heads across layers, turning the combinatorial mapping heuristics into a learning problem.
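The sketch below is a hypothetical illustration of such a dense head-by-head alignment loss, not the authors' implementation: every student head is compared against every teacher head, and learnable alignment logits (a softmax over teacher heads, assumed here) replace a hand-crafted head mapping.

```python
import numpy as np

def a2d_alignment_loss(student_heads, teacher_heads, logits):
    """Hypothetical dense head-by-head attention alignment loss.

    student_heads: (Hs, n, n) student attention maps
    teacher_heads: (Ht, n, n) teacher attention maps
    logits: (Hs, Ht) learnable alignment scores (an assumption of this
            sketch; in training they would be optimized end to end)
    """
    Hs, Ht = logits.shape
    # Pairwise MSE between every student/teacher head pair.
    pair_loss = np.zeros((Hs, Ht))
    for i in range(Hs):
        for j in range(Ht):
            pair_loss[i, j] = np.mean((student_heads[i] - teacher_heads[j]) ** 2)
    # Softmax over teacher heads turns the discrete mapping choice
    # into soft, learnable weights.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return float((w * pair_loss).sum() / Hs)
```

Because the mapping weights are continuous, which teacher head supervises which student head becomes part of the optimization problem rather than a combinatorial heuristic.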

Knowledge Distillation · Machine Translation

Predict to Detect: Prediction-guided 3D Object Detection using Sequential Images

1 code implementation · ICCV 2023 · Sanmin Kim, Youngseok Kim, In-Jae Lee, Dongsuk Kum

To address this limitation, we propose a novel 3D object detection model, P2D (Predict to Detect), that integrates a prediction scheme into a detection framework to explicitly extract and leverage motion features.

3D Object Detection · Autonomous Driving · +3

UpCycling: Semi-supervised 3D Object Detection without Sharing Raw-level Unlabeled Scenes

no code implementations · ICCV 2023 · Sunwook Hwang, Youngseok Kim, Seongwon Kim, Saewoong Bahk, Hyung-Sin Kim

In this paper, we propose UpCycling, a novel SSL framework for 3D object detection that requires zero additional raw-level point clouds: it learns from unlabeled, de-identified intermediate features (i.e., smashed data) to preserve privacy.

3D Object Detection · Autonomous Driving · +3

Boosting Monocular 3D Object Detection with Object-Centric Auxiliary Depth Supervision

no code implementations · 29 Oct 2022 · Youngseok Kim, Sanmin Kim, Sangmin Sim, Jun Won Choi, Dongsuk Kum

In this way, our 3D detection network can receive additional depth supervision from raw LiDAR points, which incurs no human annotation cost, and learn to estimate accurate depth without explicitly predicting a depth map.
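As a rough illustration of where such annotation-free depth labels come from (a sketch under generic pinhole-camera assumptions, not the paper's pipeline): raw LiDAR points can be projected through the camera intrinsics to form a sparse per-pixel depth map that serves as auxiliary supervision.

```python
import numpy as np

def lidar_to_sparse_depth(points_cam, K, h, w):
    """Project LiDAR points (already transformed into the camera frame)
    into the image plane to build a sparse depth map.

    points_cam: (N, 3) x, y, z in camera coordinates (z pointing forward)
    K: (3, 3) camera intrinsic matrix
    h, w: image height and width in pixels
    """
    depth = np.zeros((h, w))
    z = points_cam[:, 2]
    valid = z > 0                              # keep points in front of the camera
    uvw = (K @ points_cam[valid].T).T          # perspective projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)    # pixel column
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)    # pixel row
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth
```

Pixels hit by a LiDAR return get a ground-truth depth for free; all other pixels stay zero and are simply masked out of the auxiliary loss.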

Depth Estimation · Depth Prediction · +4

Scale Invariant Power Iteration

no code implementations · 23 May 2019 · Cheolmin Kim, Youngseok Kim, Diego Klabjan

In this work, we introduce a new class of optimization problems called scale invariant problems and prove that they can be efficiently solved by scale invariant power iteration (SCI-PI) with a generalized convergence guarantee of power iteration.
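For intuition (an example of mine, not taken from the paper): the leading-eigenvector objective f(x) = ½ xᵀAx is scale invariant, and the SCI-PI-style update x ← ∇f(x)/‖∇f(x)‖ then reduces to classical power iteration, since ∇f(x) = Ax.

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Classical power iteration. For f(x) = 0.5 * x^T A x the update
    x <- grad f(x) / ||grad f(x)|| is exactly this recursion, so the
    leading eigenvector is a fixed point of the normalized iteration."""
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])   # random positive starting vector
    for _ in range(iters):
        x = A @ x                # gradient of 0.5 * x^T A x
        x /= np.linalg.norm(x)   # rescale; allowed because f is scale invariant
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
v = power_iteration(A)  # converges to the eigenvector of the largest eigenvalue
```

The contribution of the paper is the broader class of scale invariant objectives, beyond this eigenvalue special case, for which the same normalized gradient recursion provably converges.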
