Search Results for author: Junsu Kim

Found 14 papers, 6 papers with code

LoRA Training Provably Converges to a Low-Rank Global Minimum or It Fails Loudly (But it Probably Won't Fail)

no code implementations • 13 Feb 2025 • Junsu Kim, Jaeyeon Kim, Ernest K. Ryu

Low-rank adaptation (LoRA) has become a standard approach for fine-tuning large foundation models.
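
The abstract stops at the one-line framing above, so for readers new to LoRA, here is a minimal sketch of the low-rank adapter it refers to; the class name, rank, and scaling below are illustrative defaults, not the paper's setup.

```python
# Minimal sketch of a LoRA-style adapter on a frozen linear layer (PyTorch).
# Names (LoRALinear, rank, alpha) are illustrative, not from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # low-rank factor B (init 0)
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(512, 512), rank=8)
y = layer(torch.randn(4, 512))
```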

B-RIGHT: Benchmark Re-evaluation for Integrity in Generalized Human-Object Interaction Testing

1 code implementation • 28 Jan 2025 • Yoojin Jang, Junsu Kim, Hayeon Kim, Eun-ki Lee, Eun-Sol Kim, Seungryul Baek, Jaejun Yoo

Human-object interaction (HOI) is an essential problem in artificial intelligence (AI) that aims to understand the complex relationships between humans and objects in the visual world.

Human-Object Interaction Detection
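
As a rough illustration of what HOI detection produces, here is a minimal sketch of the <human, interaction, object> triplet structure; the dataclass, field names, and threshold are hypothetical, not part of B-RIGHT.

```python
# Minimal sketch of an HOI detection output: <human box, interaction, object box> triplets.
# The dataclass and field names are illustrative, not from B-RIGHT.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class HOITriplet:
    human_box: Box
    object_box: Box
    object_class: str
    interaction: str      # e.g. "ride", "hold"
    score: float

def filter_predictions(preds: List[HOITriplet], thr: float = 0.5) -> List[HOITriplet]:
    """Keep only confident triplets, as a benchmark evaluator typically would."""
    return [p for p in preds if p.score >= thr]

preds = [HOITriplet((10, 20, 80, 200), (60, 120, 140, 220), "bicycle", "ride", 0.91)]
print(filter_predictions(preds))
```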

Unsupervised-to-Online Reinforcement Learning

no code implementations • 27 Aug 2024 • Junsu Kim, Seohong Park, Sergey Levine

Offline-to-online reinforcement learning (RL), a framework that trains a policy with offline RL and then further fine-tunes it with online RL, has been considered a promising recipe for data-driven decision-making.

Offline RL, reinforcement-learning, +2
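
To make the offline-to-online recipe mentioned in the abstract concrete, here is a toy tabular sketch: pretrain a Q-function from a logged dataset, then fine-tune it with online interaction. The two-state MDP, hyperparameters, and epsilon-greedy rule are invented for illustration and are not the paper's algorithm.

```python
# Toy sketch of the offline-to-online recipe described above: pretrain a tabular
# Q-function from a fixed dataset, then fine-tune it online. Everything here is made up.
import random

N_STATES, N_ACTIONS, GAMMA, LR = 2, 2, 0.9, 0.1
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(s, a):
    """Tiny hand-made MDP: action 1 in state 0 pays off."""
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

def td_update(s, a, r, s2):
    target = r + GAMMA * max(Q[s2])
    Q[s][a] += LR * (target - Q[s][a])

# Phase 1: offline RL on a logged dataset of (s, a, r, s') transitions.
dataset = [(0, 1, 1.0, 1), (1, 0, 0.0, 0), (0, 0, 0.0, 0)] * 100
for s, a, r, s2 in dataset:
    td_update(s, a, r, s2)

# Phase 2: online fine-tuning with epsilon-greedy interaction.
s = 0
for _ in range(500):
    a = random.randrange(N_ACTIONS) if random.random() < 0.1 else max(range(N_ACTIONS), key=lambda x: Q[s][x])
    s2, r = step(s, a)
    td_update(s, a, r, s2)
    s = s2

print(Q)
```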

VPOcc: Exploiting Vanishing Point for Monocular 3D Semantic Occupancy Prediction

no code implementations • 7 Aug 2024 • Junsu Kim, Junhee Lee, Ukcheol Shin, Jean Oh, Kyungdon Joo

First, the VPZoomer module utilizes the vanishing point (VP) to achieve information-balanced feature extraction across the scene by generating a VP-based zoom-in image.

3D Semantic Occupancy Prediction
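
A hedged sketch of the zoom-in idea the abstract alludes to: crop a region around the vanishing point and resample it back to the input resolution, so VP-side (distant) content receives more pixels. The function below is a nearest-neighbour toy, not the VPZoomer module.

```python
# Illustrative VP-guided zoom-in: crop around the vanishing point and resample.
# Not the paper's implementation; nearest-neighbour resampling for simplicity.
import numpy as np

def vp_zoom(image: np.ndarray, vp: tuple, zoom: float = 2.0) -> np.ndarray:
    """Nearest-neighbour zoom-in around the vanishing point (u, v)."""
    h, w = image.shape[:2]
    u, v = vp
    ch, cw = int(h / zoom), int(w / zoom)
    top = int(np.clip(v - ch / 2, 0, h - ch))
    left = int(np.clip(u - cw / 2, 0, w - cw))
    crop = image[top:top + ch, left:left + cw]
    # resample the crop back to the original (h, w) resolution
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]

img = np.random.rand(480, 640, 3)
zoomed = vp_zoom(img, vp=(320, 200), zoom=2.0)
print(zoomed.shape)  # (480, 640, 3)
```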

Visual Representation Learning with Stochastic Frame Prediction

no code implementations • 11 Jun 2024 • Huiwon Jang, Dongyoung Kim, Junsu Kim, Jinwoo Shin, Pieter Abbeel, Younggyo Seo

To tackle this challenge, in this paper, we revisit the idea of stochastic video generation that learns to capture uncertainty in frame prediction and explore its effectiveness for representation learning.

Decoder, Pose Tracking, +6
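
As a minimal illustration of stochastic frame prediction, the sketch below uses a latent variable with the reparameterization trick to capture next-frame uncertainty; the sizes, MLP decoder, and KL weight are made up and are not the paper's architecture.

```python
# Minimal sketch of stochastic frame prediction: a latent variable z captures the
# uncertainty of the next frame, sampled with the reparameterization trick.
import torch
import torch.nn as nn

class StochasticFramePredictor(nn.Module):
    def __init__(self, frame_dim=64, latent_dim=8):
        super().__init__()
        self.posterior = nn.Linear(frame_dim * 2, latent_dim * 2)  # q(z | x_t, x_{t+1})
        self.decoder = nn.Sequential(nn.Linear(frame_dim + latent_dim, 128),
                                     nn.ReLU(), nn.Linear(128, frame_dim))

    def forward(self, frame_t, frame_t1):
        mu, logvar = self.posterior(torch.cat([frame_t, frame_t1], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        pred = self.decoder(torch.cat([frame_t, z], -1))        # predict the next frame
        recon = (pred - frame_t1).pow(2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return recon + 1e-3 * kl

model = StochasticFramePredictor()
loss = model(torch.randn(16, 64), torch.randn(16, 64))
loss.backward()
```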

VLM-PL: Advanced Pseudo Labeling Approach for Class Incremental Object Detection via Vision-Language Model

no code implementations • 8 Mar 2024 • Junsu Kim, Yunhoe Ku, Jihyeon Kim, Junuk Cha, Seungryul Baek

This technique uses a Vision-Language Model (VLM) to verify the correctness of pseudo ground-truths (GTs) without requiring additional model training.

Class-Incremental Object Detection, Incremental Learning, +4
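
A generic sketch of pseudo-label verification with a vision-language model: crop each pseudo box, score it against a class prompt, and keep only boxes the VLM agrees with. The `vlm_score` stub, prompt format, and threshold are stand-ins, not the paper's pipeline.

```python
# Hedged sketch of VLM-based pseudo-label verification; `vlm_score` is a placeholder
# for a real image-text similarity model, and the threshold is illustrative.
import numpy as np

def vlm_score(crop: np.ndarray, prompt: str) -> float:
    """Placeholder for image-text similarity; returns a fake confidence here."""
    return float(np.clip(crop.mean(), 0.0, 1.0))

def verify_pseudo_labels(image, boxes, labels, thr=0.6):
    kept = []
    for (x1, y1, x2, y2), cls in zip(boxes, labels):
        crop = image[y1:y2, x1:x2]
        if vlm_score(crop, f"a photo of a {cls}") >= thr:
            kept.append(((x1, y1, x2, y2), cls))
    return kept

image = np.random.rand(240, 320, 3)
pseudo = verify_pseudo_labels(image, [(10, 10, 100, 120)], ["dog"])
print(pseudo)
```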

SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection

no code implementations • CVPR 2024 • Junsu Kim, Hoseong Cho, Jihyeon Kim, Yihalem Yimolal Tiruneh, Seungryul Baek

In the field of class incremental learning (CIL), generative replay has become increasingly prominent as a method to mitigate catastrophic forgetting, alongside continuous improvements in generative models.

class-incremental learning, Class Incremental Learning, +5
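
A generic sketch of generative replay for class-incremental training, mixing real new-class samples with synthetic old-class samples at each step; the generator and training stubs are placeholders rather than SDDGR's diffusion-based components.

```python
# Generic generative-replay loop: each step mixes real samples of the new classes
# with synthetic samples of the old classes drawn from a generative model.
import random

def generate_old_sample(old_classes):
    """Stand-in for a generative model conditioned on a previously learned class."""
    cls = random.choice(old_classes)
    return {"image": f"<synthetic image of {cls}>", "label": cls}

def train_step(model_state, batch):
    """Placeholder update; a real detector would compute a loss and backprop here."""
    model_state["seen"] += [ex["label"] for ex in batch]
    return model_state

old_classes = ["car", "bus"]
new_data = [{"image": f"<real image {i}>", "label": "bicycle"} for i in range(8)]

model_state = {"seen": []}
for i in range(0, len(new_data), 4):
    replay = [generate_old_sample(old_classes) for _ in range(2)]  # replayed old classes
    model_state = train_step(model_state, new_data[i:i + 4] + replay)

print(sorted(set(model_state["seen"])))
```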

Imitating Graph-Based Planning with Goal-Conditioned Policies

1 code implementation • 20 Mar 2023 • Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin

Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target goal, and the agents learn to execute subgoal-conditioned policies.

Reinforcement Learning (RL)
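
A toy sketch of the planning loop described above: search a graph of states for a subgoal sequence, then hand the next subgoal to a goal-conditioned policy. The graph, BFS planner, and policy stub are illustrative, not the paper's method.

```python
# Toy graph-based planning: plan a subgoal path, then act toward the first subgoal.
from collections import deque

def shortest_subgoal_path(graph, start, goal):
    """BFS over the subgoal graph; returns the list of subgoals from start to goal."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return []

def goal_conditioned_policy(state, subgoal):
    """Placeholder: a learned policy would output an action toward `subgoal`."""
    return f"move {state} -> {subgoal}"

graph = {"s0": ["s1"], "s1": ["s2", "s3"], "s3": ["goal"]}
path = shortest_subgoal_path(graph, "s0", "goal")      # ['s0', 's1', 's3', 'goal']
print(goal_conditioned_policy("s0", path[1]))
```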

Multi-View Masked World Models for Visual Robotic Manipulation

1 code implementation • 5 Feb 2023 • Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel

In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.

Camera Calibration, Representation Learning
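
As a rough illustration of multi-view masked reconstruction, the sketch below drops one camera view and trains a small network to reconstruct it from the others; the dimensions and MLP architecture are invented, not the paper's world model.

```python
# Minimal multi-view masked reconstruction: mask one view, reconstruct it from the rest.
import torch
import torch.nn as nn

n_views, feat_dim = 3, 32
net = nn.Sequential(nn.Linear(n_views * feat_dim, 128), nn.ReLU(),
                    nn.Linear(128, n_views * feat_dim))

views = torch.randn(16, n_views, feat_dim)            # features from 3 cameras
mask = torch.zeros(16, n_views, 1)
mask[:, torch.randint(n_views, (1,)).item()] = 1.0    # mask out one view

masked_input = (views * (1 - mask)).flatten(1)        # zero the masked view
recon = net(masked_input).view(16, n_views, feat_dim)
loss = ((recon - views) * mask).pow(2).mean()         # reconstruct only the masked view
loss.backward()
print(float(loss))
```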

Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning

1 code implementation • NeurIPS 2021 • Junsu Kim, Younggyo Seo, Jinwoo Shin

In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.

Efficient Exploration, Hierarchical Reinforcement Learning, +3
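
A hedged sketch of landmark-guided subgoal selection: pick a promising landmark and pull the high-level policy's raw subgoal toward it. The nearest-landmark rule and mixing weight are illustrative simplifications, not HIGL's exact formulation.

```python
# Illustrative landmark-guided subgoal shift; selection rule and weight are made up.
import numpy as np

def guide_subgoal(raw_subgoal, state, landmarks, pull=0.5):
    dists = np.linalg.norm(landmarks - state, axis=1)
    target = landmarks[dists.argmin()]                 # chosen landmark to explore toward
    return (1 - pull) * raw_subgoal + pull * target    # shifted subgoal for the low level

state = np.array([0.0, 0.0])
landmarks = np.array([[2.0, 1.0], [5.0, 5.0]])         # e.g. novel / rarely visited states
raw = np.array([0.5, 0.5])                             # subgoal proposed by high-level policy
print(guide_subgoal(raw, state, landmarks))
```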

Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning

no code implementations • 29 Sep 2021 • Kyunghwan Son, Junsu Kim, Yung Yi, Jinwoo Shin

Although these two sources are both important factors for learning robust policies of agents, prior works either do not separate them or deal with only a single risk source, which could lead to suboptimal equilibria.

Multi-agent Reinforcement Learning, quantile regression, +5
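
Not the paper's method, but since the tags above mention quantile regression, here is a sketch of the quantile-Huber loss that distributional RL methods use to learn a return distribution, from which different risk measures can then be read off.

```python
# Standard quantile-Huber loss used in distributional RL; shapes are illustrative.
import torch

def quantile_huber_loss(pred_quantiles, target, taus, kappa=1.0):
    """pred_quantiles: (B, N) predicted quantiles; target: (B, 1); taus: (N,)."""
    diff = target - pred_quantiles                          # (B, N)
    huber = torch.where(diff.abs() <= kappa,
                        0.5 * diff.pow(2),
                        kappa * (diff.abs() - 0.5 * kappa))
    weight = (taus - (diff.detach() < 0).float()).abs()     # quantile weighting
    return (weight * huber).mean()

taus = (torch.arange(8, dtype=torch.float32) + 0.5) / 8
pred = torch.randn(4, 8, requires_grad=True)
loss = quantile_huber_loss(pred, torch.randn(4, 1), taus)
loss.backward()
```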

Self-Improved Retrosynthetic Planning

1 code implementation • 9 Jun 2021 • Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin

Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.

Multi-step retrosynthesis, valid
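
A generic sketch of the self-improving loop the abstract describes: search for synthesis routes, keep the successful trajectories, and train on them for the next round. The search and training stubs are placeholders, not the paper's planner or model.

```python
# Generic self-imitation loop: collect successful routes, imitate them, repeat.
import random

def search_route(target, model_strength):
    """Stand-in for a retrosynthesis search; succeeds more often as the model improves."""
    success = random.random() < model_strength
    return ([f"{target} -> precursors"] if success else None)

def train_on(dataset, model_strength):
    """Placeholder imitation step: more successful routes -> slightly stronger model."""
    return min(1.0, model_strength + 0.01 * len(dataset))

targets = [f"mol_{i}" for i in range(20)]
model_strength, replay = 0.3, []
for round_ in range(5):
    found = [r for t in targets if (r := search_route(t, model_strength))]
    replay.extend(found)                       # successful trajectories found by itself
    model_strength = train_on(found, model_strength)
    print(f"round {round_}: {len(found)} routes solved")
```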

Guiding Deep Molecular Optimization with Genetic Exploration

2 code implementations • NeurIPS 2020 • Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin

De novo molecular design attempts to search over the chemical space for molecules with the desired property.

Imitation Learning
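
A toy sketch of genetic exploration for molecular optimization: score candidates, keep the fittest, and produce children by crossover and mutation. Real molecular GAs operate on SMILES strings or molecular graphs; the string "molecules" and property oracle here are fake.

```python
# Toy genetic search over a chemical-like space; scorer and alphabet are placeholders.
import random

ALPHABET = "CNOFS"

def score(mol: str) -> float:
    """Placeholder property oracle (counts nitrogens); a real one would be e.g. QED."""
    return mol.count("N") / len(mol)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mol: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in mol)

population = ["".join(random.choice(ALPHABET) for _ in range(10)) for _ in range(20)]
for gen in range(10):
    population.sort(key=score, reverse=True)
    parents = population[:10]                                   # keep the fittest half
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children

print(max(population, key=score), max(map(score, population)))
```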
