Search Results for author: Junsu Kim

Found 9 papers, 5 papers with code

VLM-PL: Advanced Pseudo Labeling Approach for Class Incremental Object Detection with Vision-Language Model

no code implementations8 Mar 2024 Junsu Kim, Yunhoe Ku, Jihyeon Kim, Junuk Cha, Seungryul Baek

This technique uses a Vision-Language Model (VLM) to verify the correctness of pseudo ground-truths (GTs) without requiring additional model training.

Class-Incremental Object Detection Incremental Learning +3
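The verification idea in the snippet above can be sketched as a simple filtering step. The `vlm_score` function below is a hypothetical stand-in for a real VLM (e.g. a CLIP-style image-text similarity); it is mocked here purely for illustration, and no training is involved:

```python
def vlm_score(crop, class_name):
    """Return a similarity score between an image crop and a class prompt.
    Mocked for this sketch: a real implementation would embed both with a
    VLM and take the cosine similarity of the embeddings."""
    return 0.9 if class_name in crop else 0.1  # toy stand-in

def verify_pseudo_labels(pseudo_gts, threshold=0.5):
    """Keep only pseudo ground-truths whose crop the VLM judges to be
    consistent with the predicted class name."""
    verified = []
    for crop, class_name in pseudo_gts:
        if vlm_score(crop, class_name) >= threshold:
            verified.append((crop, class_name))
    return verified

pseudo = [("a photo of a dog", "dog"), ("a photo of grass", "dog")]
print(verify_pseudo_labels(pseudo))  # only the first pseudo-label survives
```

The point of the design is that the detector's noisy pseudo-labels are filtered by a frozen, pretrained model, so no extra training signal is needed for the verification step itself.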

SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection

no code implementations27 Feb 2024 Junsu Kim, Hoseong Cho, Jihyeon Kim, Yihalem Yimolal Tiruneh, Seungryul Baek

In the field of class incremental learning (CIL), generative replay has become increasingly prominent as a method to mitigate catastrophic forgetting, alongside the continuous improvements in generative models.

Class Incremental Learning Class-Incremental Object Detection +3

Imitating Graph-Based Planning with Goal-Conditioned Policies

1 code implementation20 Mar 2023 Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin

Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target goal, and the agents learn to execute subgoal-conditioned policies.

Reinforcement Learning (RL)
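The graph-based planning described above can be sketched as a shortest-path search over a graph of landmark states. This is a minimal illustration, not the paper's algorithm: the graph, node names, and BFS planner are all assumptions standing in for a learned reachability graph:

```python
from collections import deque

def plan_subgoals(graph, start, goal):
    """Breadth-first search over a subgoal graph: returns the sequence of
    subgoals a subgoal-conditioned policy would execute in order."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# Toy graph: nodes are landmark states, edges mean "reachable by the policy"
graph = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s3": ["goal"]}
print(plan_subgoals(graph, "s0", "goal"))  # ['s0', 's1', 's3', 'goal']
```

In the actual setting, edges would be weighted by an estimated cost-to-reach and the low-level policy would be conditioned on each subgoal in turn.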

Multi-View Masked World Models for Visual Robotic Manipulation

1 code implementation5 Feb 2023 Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel

In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.

Camera Calibration Representation Learning

Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning

1 code implementation NeurIPS 2021 Junsu Kim, Younggyo Seo, Jinwoo Shin

In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.

Efficient Exploration Hierarchical Reinforcement Learning +2

Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning

no code implementations29 Sep 2021 Kyunghwan Son, Junsu Kim, Yung Yi, Jinwoo Shin

Although these two sources are both important factors for learning robust policies of agents, prior works do not separate them or deal with only a single risk source, which could lead to suboptimal equilibria.

Multi-agent Reinforcement Learning reinforcement-learning +3

Self-Improved Retrosynthetic Planning

1 code implementation9 Jun 2021 Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin

Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.

Multi-step retrosynthesis valid
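The self-improving procedure mentioned above, sample with the current model, keep the successful trajectories, and train on them, can be sketched with a toy problem. Everything here is an assumption for illustration: the "routes" are sequences of integer steps summing to a target, and the "model" is just a table of per-action weights:

```python
import random

random.seed(0)

# Toy stand-in for retrosynthetic search: pick steps summing exactly to a
# target within a move budget. The "model" is per-action sampling weights.
TARGET, ACTIONS, MAX_STEPS = 4, [1, 2, 3], 4

def rollout(weights):
    """Sample a trajectory; return it on success, None on failure."""
    total, traj = 0, []
    for _ in range(MAX_STEPS):
        a = random.choices(ACTIONS, weights=[weights[x] for x in ACTIONS])[0]
        total += a
        traj.append(a)
        if total == TARGET:
            return traj   # success: a valid "route" was found
        if total > TARGET:
            return None   # overshot: failure
    return None

def imitate(weights, traj):
    """Self-imitation update: upweight actions used in successful rollouts."""
    for a in traj:
        weights[a] += 1.0

weights = {a: 1.0 for a in ACTIONS}
for _ in range(200):
    traj = rollout(weights)
    if traj is not None:
        imitate(weights, traj)

print(weights)  # actions on successful routes now carry more weight
```

The structure mirrors the snippet's main idea: the model generates its own training data by searching, and only trajectories that actually succeed are imitated.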

Guiding Deep Molecular Optimization with Genetic Exploration

2 code implementations NeurIPS 2020 Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin

De novo molecular design attempts to search over the chemical space for molecules with the desired property.

Imitation Learning
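Genetic exploration over a chemical space can be illustrated with a deliberately simplified genetic algorithm. This is a sketch under toy assumptions, not the paper's method: bit-strings stand in for molecules and the number of ones stands in for the desired property score:

```python
import random

random.seed(0)

def score(x):
    """Toy 'molecular property': count of 1-bits in the candidate."""
    return sum(x)

def mutate(x, rate=0.1):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in x]

def crossover(a, b):
    """Single-point crossover between two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, length=16, generations=30):
    """Keep the fittest half each generation; refill with mutated children."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=score)

best = evolve()
print(score(best))  # near the maximum of 16 after evolution
```

In the molecular setting, candidates would be molecule representations, crossover and mutation would be chemistry-aware operators, and the score would come from a property predictor.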
