no code implementations • 18 Feb 2025 • Yihong Liu, Dongyeop Kang, Sehoon Ha
Autonomous robotic wiping is an important task in various industries, ranging from industrial manufacturing to sanitization in healthcare.
no code implementations • 17 Feb 2025 • Morgan Byrd, Jackson Crandell, Mili Das, Jessica Inman, Robert Wright, Sehoon Ha
Numerous real-world control problems involve dynamics and objectives affected by unobservable hidden parameters, ranging from autonomous driving to robotic manipulation, which cause performance degradation during sim-to-real transfer.
no code implementations • 22 Sep 2024 • Naoki Yokoyama, Ram Ramrakhya, Abhishek Das, Dhruv Batra, Sehoon Ha
We present the Habitat-Matterport 3D Open Vocabulary Object Goal Navigation dataset (HM3D-OVON), a large-scale benchmark that broadens the scope and semantic range of prior Object Goal Navigation (ObjectNav) benchmarks.
no code implementations • 14 Sep 2024 • Juntao He, Baxi Chong, Zhaochen Xu, Sehoon Ha, Daniel I. Goldman
Here we develop a MuJoCo-based simulator tailored to this robotic platform and use the simulation to develop a reinforcement learning-based control framework that dynamically adjusts horizontal and vertical body undulation and limb stepping in real time.
no code implementations • 7 Jun 2024 • Seungeun Rho, Laura Smith, Tianyu Li, Sergey Levine, Xue Bin Peng, Sehoon Ha
To this end, we introduce Language Guided Skill Discovery (LGSD), a skill discovery framework that aims to directly maximize the semantic diversity between skills.
1 code implementation • 6 Dec 2023 • Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, Bernadette Bucher
Understanding how humans leverage semantic knowledge to navigate unfamiliar environments and decide where to explore next is pivotal for developing robots capable of human-like search behaviors.
no code implementations • CVPR 2024 • Tianyu Li, Calvin Qiao, Guanqiao Ren, KangKang Yin, Sehoon Ha
This paper introduces the Accelerated Auto-regressive Motion Diffusion Model (AAMDM), a novel motion synthesis framework designed to achieve quality, diversity, and efficiency simultaneously.
no code implementations • 16 Oct 2023 • Tianle Huang, Nitish Sontakke, K. Niranjan Kumar, Irfan Essa, Stefanos Nikolaidis, Dennis W. Hong, Sehoon Ha
Domain randomization (DR), which entails training a policy with randomized dynamics, has proven to be a simple yet effective algorithm for reducing the gap between simulation and the real world.
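As a rough illustration of the idea rather than the paper's implementation, DR resamples the simulator's dynamics parameters at the start of each training episode so the policy sees a distribution of environments instead of a single one; the parameter names and ranges below are hypothetical:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class SimParams:                  # hypothetical dynamics parameters
        friction: float = 1.0
        mass_scale: float = 1.0
        latency_s: float = 0.0

    def sample_dynamics(rng: np.random.Generator) -> SimParams:
        # Resample the simulator's dynamics for every episode so the policy
        # must perform well across the whole randomized distribution.
        return SimParams(
            friction=rng.uniform(0.5, 1.25),
            mass_scale=rng.uniform(0.8, 1.2),
            latency_s=rng.uniform(0.0, 0.04),
        )

    rng = np.random.default_rng(0)
    print([sample_dynamics(rng) for _ in range(3)])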
no code implementations • 29 Jun 2023 • Anthony Francis, Claudia Pérez-D'Arpino, Chengshu Li, Fei Xia, Alexandre Alahi, Rachid Alami, Aniket Bera, Abhijat Biswas, Joydeep Biswas, Rohan Chandra, Hao-Tien Lewis Chiang, Michael Everett, Sehoon Ha, Justin Hart, Jonathan P. How, Haresh Karnan, Tsang-Wei Edward Lee, Luis J. Manso, Reuth Mirksy, Sören Pirk, Phani Teja Singamaneni, Peter Stone, Ada V. Taylor, Peter Trautman, Nathan Tsoi, Marynel Vázquez, Xuesu Xiao, Peng Xu, Naoki Yokoyama, Alexander Toshev, Roberto Martín-Martín
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation.
no code implementations • 24 Jun 2023 • J. Taery Kim, Wenhao Yu, Yash Kothari, Jie Tan, Greg Turk, Sehoon Ha
To build a successful guide robot, our paper explores three key topics: (1) formalizing the navigation mechanism of a guide dog and a human, (2) developing a data-driven model of their interaction, and (3) improving user safety.
no code implementations • 19 Apr 2023 • Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine
Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.
no code implementations • 16 Mar 2023 • Nitish Sontakke, Hosik Chae, Sangjoon Lee, Tianle Huang, Dennis W. Hong, Sehoon Ha
In this work, we demonstrate robust sim-to-real transfer of control policies on the BALLU robots via system identification and our novel residual physics learning method, Environment Mimic (EnvMimic).
no code implementations • 14 Mar 2023 • Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk, Yuting Ye
In-hand object manipulation is challenging to simulate due to complex contact dynamics, non-repetitive finger gaits, and the need to indirectly control unactuated objects.
no code implementations • 17 Dec 2022 • K. Niranjan Kumar, Irfan Essa, Sehoon Ha
Real-world autonomous missions often require rich interaction with nearby objects, such as doors or switches, along with effective navigation.
1 code implementation • 26 Oct 2022 • Simar Kareer, Naoki Yokoyama, Dhruv Batra, Sehoon Ha, Joanne Truong
ViNL consists of: (1) a visual navigation policy that outputs linear and angular velocity commands that guide the robot to a goal coordinate in unfamiliar indoor environments; and (2) a visual locomotion policy that controls the robot's joints to avoid stepping on obstacles while following provided velocity commands.
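Conceptually the two policies compose hierarchically: navigation emits a velocity command, and locomotion tracks it at the joint level. A hypothetical sketch of that interface (the real policies are learned and take visual input):

    import numpy as np

    def navigation_policy(goal_xy):
        # Stand-in for the learned visual navigation policy; it only
        # illustrates the interface: a (linear, angular) velocity command.
        heading = np.arctan2(goal_xy[1], goal_xy[0])
        return np.array([0.5, 0.8 * heading])

    def locomotion_policy(cmd_vw, n_joints=12):
        # Stand-in for the learned visual locomotion policy: maps the
        # commanded velocity to joint-level targets (trivial placeholder).
        return np.full(n_joints, 0.1 * cmd_vw[0])

    cmd = navigation_policy(np.array([2.0, 1.0]))
    print(cmd, locomotion_policy(cmd).shape)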
1 code implementation • 12 Sep 2022 • Taylor Hearn, Sravan Jayanthi, Sehoon Ha
The capacity for rapid domain adaptation is important to increasing the applicability of reinforcement learning (RL) to real world problems.
no code implementations • 5 Mar 2022 • Tsung-Yen Yang, Tingnan Zhang, Linda Luu, Sehoon Ha, Jie Tan, Wenhao Yu
In this paper, we propose a safe reinforcement learning framework that switches between a safe recovery policy that prevents the robot from entering unsafe states, and a learner policy that is optimized to complete the task.
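The switching rule can be pictured as a guard around the learner (a hypothetical sketch, not the authors' code): some safety criterion decides, per step, whether to hand control to the recovery policy.

    def select_action(state, learner_policy, recovery_policy, is_unsafe):
        # Hand control to the recovery policy whenever the state is flagged
        # as (about to become) unsafe; otherwise let the learner act.
        return recovery_policy(state) if is_unsafe(state) else learner_policy(state)

    # Toy 1-D example: keep |x| below 1.0, triggering recovery with a margin.
    learner  = lambda x: 0.3                       # task-driven action
    recovery = lambda x: -0.5 if x > 0 else 0.5    # push back toward safety
    unsafe   = lambda x: abs(x) > 0.8
    print(select_action(0.9, learner, recovery, unsafe))   # -0.5 (recovery)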
no code implementations • 29 Sep 2021 • Hao-Lun Hsu, Qiuhua Huang, Sehoon Ha
One of the key challenges to deep reinforcement learning (deep RL) is to ensure safety at both training and testing phases.
no code implementations • 26 Sep 2021 • Ren Liu, Nitish Sontakke, Sehoon Ha
Compared with the TGs, FSMs offer high-level management of each leg's motion generator and enable a flexible state arrangement, which makes the learned behavior less vulnerable to unseen perturbations or challenging terrains.
no code implementations • 22 Sep 2021 • Naoki Yokoyama, Qian Luo, Dhruv Batra, Sehoon Ha
Recent advances in deep reinforcement learning and scalable photorealistic simulation have led to increasingly mature embodied AI for various visual tasks, including navigation.
no code implementations • 14 Mar 2021 • Naoki Yokoyama, Sehoon Ha, Dhruv Batra
Several related works on navigation have used Success weighted by Path Length (SPL) as the primary metric for evaluating the path an agent takes to a goal location, but SPL is limited in its ability to properly evaluate agents with complex dynamics.
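For context, SPL (Anderson et al., 2018) weights binary episode success by the ratio of the shortest-path distance to the length of the path the agent actually traveled; a minimal sketch of the metric, with illustrative array names:

    import numpy as np

    def spl(successes, shortest_dists, path_lengths):
        # Success weighted by Path Length (Anderson et al., 2018):
        # SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i)
        S = np.asarray(successes, dtype=float)
        l = np.asarray(shortest_dists, dtype=float)
        p = np.asarray(path_lengths, dtype=float)
        return float(np.mean(S * l / np.maximum(p, l)))

    # Two episodes: one success along a near-optimal path, one failure.
    print(spl([1, 0], [5.0, 4.0], [6.0, 9.0]))  # = 0.5 * (5/6 + 0) ≈ 0.417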
no code implementations • 13 Mar 2021 • Visak Kumar, Sehoon Ha, C. Karen Liu
An EAP takes as input the predicted future state error in the target environment, which is provided by an error-prediction function trained simultaneously with the EAP.
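In other words, the policy conditions on a learned estimate of how the target dynamics will deviate from the model seen in training; a hypothetical linear sketch of that interface (the real functions are learned networks, and all names here are illustrative):

    import numpy as np

    def error_predictor(state, action, P):
        # Hypothetical stand-in for the learned error-prediction function:
        # predicts how the future state in the target environment will
        # deviate from the source-environment prediction.
        return P @ np.concatenate([state, action])

    def error_aware_policy(state, predicted_error, W):
        # The policy conditions on the predicted error so it can compensate
        # for unmodeled target dynamics at execution time.
        return W @ np.concatenate([state, predicted_error])

    state, action = np.ones(4), np.ones(2)
    P, W = np.full((4, 6), 0.1), np.full((2, 8), 0.1)
    err = error_predictor(state, action, P)
    print(error_aware_policy(state, err, W))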
no code implementations • 1 Jan 2021 • Miguel Angel Zamora Mora, Momchil Peychev, Sehoon Ha, Martin Vechev, Stelian Coros
Current reinforcement learning (RL) methods use simulation models as simple black-box oracles.
no code implementations • 6 Nov 2020 • Qian Luo, Maks Sorokin, Sehoon Ha
Therefore, learning a navigation policy for a new robot with a new sensor configuration or a new target remains a challenging problem.
no code implementations • 2 Nov 2020 • Joanne Taery Kim, Sehoon Ha
Recent advances in deep reinforcement learning (deep RL) enable researchers to solve challenging control problems, from simulated environments to real-world robotic tasks.
no code implementations • 27 Oct 2020 • Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn
Safety is an essential component for deploying reinforcement learning (RL) algorithms in real-world scenarios, and is critical during the learning process itself.
1 code implementation • 20 Feb 2020 • Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan
In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
1 code implementation • 28 Sep 2019 • Wenhao Yu, Jie Tan, Yunfei Bai, Erwin Coumans, Sehoon Ha
The key idea behind MSO is to expose the same adaptation process, Strategy Optimization (SO), to both the training and testing phases.
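Concretely, Strategy Optimization searches over a low-dimensional latent input to a fixed policy, and running the same search during training keeps that latent space useful when the search is repeated on the target system. A hypothetical sketch using random search (the paper's optimizer and interfaces may differ):

    import numpy as np

    def strategy_optimization(episode_return, latent_dim=3, n_samples=64, seed=0):
        # Random-search stand-in for the latent-strategy optimizer: evaluate
        # the fixed policy under candidate strategy vectors, keep the best.
        rng = np.random.default_rng(seed)
        candidates = rng.uniform(-1.0, 1.0, size=(n_samples, latent_dim))
        returns = [episode_return(z) for z in candidates]
        return candidates[int(np.argmax(returns))]

    # Toy objective standing in for "roll out the policy conditioned on z".
    z_star = np.array([0.3, -0.2, 0.5])
    print(strategy_optimization(lambda z: -np.sum((z - z_star) ** 2)))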
no code implementations • 27 Sep 2019 • Xinlei Pan, Tingnan Zhang, Brian Ichter, Aleksandra Faust, Jie Tan, Sehoon Ha
Here, we propose a zero-shot imitation learning approach for training a visual navigation policy on legged robots from human (third-person perspective) demonstrations, enabling high-quality navigation and cost-effective data collection.
no code implementations • 9 Jul 2019 • K. Niranjan Kumar, Irfan Essa, Sehoon Ha, C. Karen Liu
Using our method, we train a robotic arm to estimate the mass distribution of an object with moving parts (e.g., an articulated rigid body system) by pushing it on a surface with unknown friction properties.
no code implementations • 26 Dec 2018 • Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, Sergey Levine
In this paper, we propose a sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn neural network policies.
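For reference, maximum entropy RL augments the expected return with an entropy bonus weighted by a temperature \alpha, in the notation used across the soft actor-critic line of work:

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

Rewarding the policy for acting as randomly as possible while still solving the task is what keeps the per-task tuning light.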
54 code implementations • 13 Dec 2018 • Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, Sergey Levine
A fork of OpenAI Baselines, with implementations of reinforcement learning algorithms.
no code implementations • 8 Mar 2017 • Visak CV Kumar, Sehoon Ha, C. Karen Liu
With this mixture of actor-critic architecture, discrete contact sequence planning is solved by selecting the best critic, while the continuous control problem is solved by optimizing the actors.
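A rough sketch of that selection rule (hypothetical names, not the authors' implementation): each discrete option owns an actor-critic pair, the critic with the highest value picks the option, and its actor produces the continuous action.

    import numpy as np

    def mixture_action(state, actors, critics):
        # Each discrete option (e.g., a candidate contact) owns an
        # actor-critic pair: the critic with the highest value selects the
        # option, and the matching actor outputs the continuous control.
        values = [critic(state) for critic in critics]
        best = int(np.argmax(values))
        return best, actors[best](state)

    # Toy example with two options.
    actors  = [lambda s: np.array([0.1]), lambda s: np.array([-0.2])]
    critics = [lambda s: 1.0,             lambda s: 2.0]
    print(mixture_action(np.zeros(3), actors, critics))   # (1, array([-0.2]))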