Search Results for author: Jigang Kim

Found 5 papers, 3 papers with code

Distributed multi-agent target search and tracking with Gaussian process and reinforcement learning

No code implementations • 29 Aug 2023 • Jigang Kim, Dohyun Jang, H. Jin Kim

Deploying multiple robots for target search and tracking has many practical applications, yet planning over unknown or partially known targets remains challenging.

Tasks: Decision Making, Multi-agent Reinforcement Learning, +1
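The entry above pairs a Gaussian process with RL for search over unknown targets. As a generic illustration only (not the paper's method), the sketch below shows how a GP posterior turns a few sparse observations of an unknown scalar field into a mean estimate plus an uncertainty map, the kind of quantity a search planner can exploit; all function names here are made up for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """GP posterior mean and per-point variance at the test locations."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Sparse observations of an unknown field (e.g. a target signal strength).
X_obs = np.array([0.0, 1.0, 3.0])
y_obs = np.sin(X_obs)
mean, var = gp_posterior(X_obs, y_obs, np.linspace(0.0, 3.0, 7))
```

The variance is small near observed points and grows in the gap between them, which is what lets a planner trade off exploiting the current estimate against exploring uncertain regions.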

Demonstration-free Autonomous Reinforcement Learning via Implicit and Bidirectional Curriculum

1 code implementation • 17 May 2023 • Jigang Kim, Daesol Cho, H. Jin Kim

While reinforcement learning (RL) has achieved great success in acquiring complex skills solely from environmental interactions, it assumes that resets to the initial state are readily available at the end of each episode.

Tasks: Reinforcement Learning (RL)
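The abstract above points at the reset assumption baked into standard episodic RL. A minimal sketch of that convention, using a toy environment invented for this example (not the paper's setup): each episode begins with a `reset()` call that teleports the agent back to the initial state, which is exactly what is unavailable in the autonomous-RL setting the paper targets.

```python
import random

class ToyEnv:
    """Minimal episodic environment: reach state 5 starting from state 0."""
    def reset(self):
        self.state = 0          # assumed free reset to the initial state
        return self.state

    def step(self, action):
        self.state += action    # action is 0 or 1
        done = self.state >= 5
        reward = 1.0 if done else 0.0
        return self.state, reward, done

env = ToyEnv()
returns = []
for episode in range(3):
    obs = env.reset()           # conventional RL assumes this is always available
    total, done = 0.0, False
    while not done:
        obs, r, done = env.step(random.choice([0, 1]))
        total += r
    returns.append(total)
```

Without that `reset()` call, the agent would have to learn to return to useful starting states on its own, which is the problem reset-free methods address.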

DHRL: A Graph-Based Approach for Long-Horizon and Sparse Hierarchical Reinforcement Learning

1 code implementation • 11 Oct 2022 • Seungjae Lee, Jigang Kim, Inkyu Jang, H. Jin Kim

Hierarchical Reinforcement Learning (HRL) has made notable progress in complex control tasks by leveraging temporal abstraction.

Tasks: Hierarchical Reinforcement Learning, Reinforcement Learning, +1
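The abstract above credits HRL's progress to temporal abstraction. As a generic, deliberately simplified illustration (not DHRL itself): a high-level policy proposes a subgoal, and a low-level policy takes primitive steps toward it for up to `k` steps before the high level is consulted again.

```python
def low_level(state, subgoal):
    """Primitive action: move one unit toward the subgoal."""
    return 1 if subgoal > state else -1

def high_level(state, goal):
    """Propose an intermediate subgoal at most 3 units toward the goal."""
    return state + max(-3, min(3, goal - state))

def run(start=0, goal=10, k=3):
    """Temporal abstraction: the high level acts once per k low-level steps."""
    state, steps = start, 0
    while state != goal and steps < 100:
        subgoal = high_level(state, goal)
        for _ in range(k):      # low level runs for up to k primitive steps
            if state == subgoal:
                break
            state += low_level(state, subgoal)
            steps += 1
    return state, steps

final, steps = run()
```

The high level reasons over subgoals (a shorter horizon of decisions), while the low level handles the fine-grained control, which is the division of labor that makes long-horizon, sparse-reward tasks tractable.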

Unsupervised Reinforcement Learning for Transferable Manipulation Skill Discovery

No code implementations • 29 Apr 2022 • Daesol Cho, Jigang Kim, H. Jin Kim

Current reinforcement learning (RL) in robotics often struggles to generalize to new downstream tasks due to its inherently task-specific training paradigm.

Tasks: Reinforcement Learning (RL), +1

Automating Reinforcement Learning with Example-based Resets

1 code implementation • 5 Apr 2022 • Jigang Kim, J. Hyeon Park, Daesol Cho, H. Jin Kim

Deep reinforcement learning has enabled robots to learn motor skills from environmental interactions with minimal to no prior knowledge.

Tasks: Continuous Control, Reinforcement Learning, +1
