Search Results for author: Jie Tan

Found 44 papers, 11 papers with code

NoRML: No-Reward Meta Learning

1 code implementation 4 Mar 2019 Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn

To this end, we introduce a method that allows for self-adaptation of learned policies: No-Reward Meta Learning (NoRML).

Meta-Learning · Reinforcement Learning (RL)

Datasets and Benchmarks for Offline Safe Reinforcement Learning

3 code implementations 15 Jun 2023 Zuxin Liu, Zijian Guo, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, Ding Zhao

This paper presents a comprehensive benchmarking suite tailored to offline safe reinforcement learning (RL) challenges, aiming to foster progress in the development and evaluation of safe learning algorithms in both the training and deployment phases.

Autonomous Driving · Benchmarking +4

Adaptive Power System Emergency Control using Deep Reinforcement Learning

1 code implementation 9 Mar 2019 Qiuhua Huang, Renke Huang, Weituo Hao, Jie Tan, Rui Fan, Zhenyu Huang

Power system emergency control is generally regarded as the last safety net for grid security and resiliency.

Benchmarking · reinforcement-learning +1

Fast and Efficient Locomotion via Learned Gait Transitions

1 code implementation 9 Apr 2021 Yuxiang Yang, Tingnan Zhang, Erwin Coumans, Jie Tan, Byron Boots

We focus on the problem of developing energy efficient controllers for quadrupedal robots.

Learning to Walk in the Real World with Minimal Human Effort

1 code implementation 20 Feb 2020 Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan

In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.

Multi-Task Learning

On the Robustness of Safe Reinforcement Learning under Observational Perturbations

1 code implementation 29 May 2022 Zuxin Liu, Zijian Guo, Zhepeng Cen, Huan Zhang, Jie Tan, Bo Li, Ding Zhao

One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward.

Adversarial Attack · reinforcement-learning +2

Preparing for the Unknown: Learning a Universal Policy with Online System Identification

1 code implementation 8 Feb 2017 Wenhao Yu, Jie Tan, C. Karen Liu, Greg Turk

Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment.

Friction
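UP-OSI, as described above, pairs two learned components: an Online System Identification (OSI) module that estimates dynamics parameters from recent state-action history, and a Universal Policy (UP) that conditions its actions on those estimates. A minimal sketch, with toy linear placeholders standing in for the paper's neural networks:

```python
import numpy as np

def osi(history):
    """Online System Identification: estimate dynamics parameters
    (e.g. mass, friction) from a recent state-action history.
    A learned network in the paper; here a toy placeholder that
    simply averages the history."""
    return np.mean(history, axis=0)

def universal_policy(state, mu):
    """Universal Policy: trained across a wide range of dynamic
    models, it takes the estimated parameters mu as an extra input.
    An illustrative linear rule stands in for the learned policy."""
    return -1.0 * state + 0.5 * mu

def up_osi_control(state, history):
    mu_hat = osi(history)                   # identify dynamics online
    return universal_policy(state, mu_hat)  # act under the estimate
```

Because the parameters are re-estimated at every step, the combined controller can respond to sudden changes in the environment, which is the responsiveness the abstract refers to.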

Learning Fast Adaptation with Meta Strategy Optimization

1 code implementation 28 Sep 2019 Wenhao Yu, Jie Tan, Yunfei Bai, Erwin Coumans, Sehoon Ha

The key idea behind MSO is to expose the same adaptation process, Strategy Optimization (SO), to both the training and testing phases.

Meta-Learning
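Strategy Optimization (SO), as described above, can be viewed as a black-box search over a latent strategy vector that conditions the policy, with the same search loop run during training and again at test time for adaptation. A toy random-search sketch; the `evaluate` episode-return function and the hill-climbing rule are illustrative assumptions, not the paper's exact optimizer:

```python
import random

def strategy_optimization(evaluate, dim=2, iters=50, sigma=0.1, seed=0):
    """Black-box search over a latent strategy vector z.
    `evaluate(z)` returns the episode return of the z-conditioned
    policy; MSO exposes this same loop in both training and testing."""
    rng = random.Random(seed)
    z = [0.0] * dim
    best = evaluate(z)
    for _ in range(iters):
        cand = [zi + rng.gauss(0.0, sigma) for zi in z]
        score = evaluate(cand)
        if score > best:  # keep perturbations that improve return
            z, best = cand, score
    return z, best

# Hypothetical return landscape peaking at z = (1, 1).
z, best = strategy_optimization(lambda z: -sum((zi - 1.0) ** 2 for zi in z))
```

Only the low-dimensional strategy vector is adapted at test time, which is what makes the adaptation fast relative to fine-tuning all policy weights.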

Policies Modulating Trajectory Generators

3 code implementations 7 Oct 2019 Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke

We propose an architecture for learning complex controllable behaviors by having simple Policies Modulate Trajectory Generators (PMTG), a powerful combination that can provide both memory and prior knowledge to the controller.
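The PMTG architecture above can be sketched as follows: the learned policy outputs modulation parameters (for example amplitude and frequency) for a simple open-loop trajectory generator plus a residual action, while the generator's phase variable supplies the memory. A minimal sketch with a sine-wave TG and a hypothetical fixed policy; the paper's TG and policy are more elaborate:

```python
import numpy as np

def trajectory_generator(phase, amplitude, frequency):
    """Open-loop trajectory prior: a sine wave whose shape the policy
    can modulate (an illustrative stand-in for the paper's TG)."""
    return amplitude * np.sin(2 * np.pi * frequency * phase)

def pmtg_step(policy, observation, phase, dt=0.01):
    """One control step: the policy modulates the TG and adds a
    residual; the TG's phase provides memory across steps."""
    amplitude, frequency, residual = policy(observation)
    action = trajectory_generator(phase, amplitude, frequency) + residual
    new_phase = (phase + frequency * dt) % 1.0
    return action, new_phase

# Hypothetical constant policy used only for demonstration.
def dummy_policy(observation):
    return 0.5, 1.0, 0.1

action, phase = pmtg_step(dummy_policy, None, phase=0.25)
```

The split gives the controller prior knowledge (the TG's cyclic structure) while the residual lets learning correct the TG where the prior is wrong.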

Large-Scale Evolution of Image Classifiers

2 code implementations ICML 2017 Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin

Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone.

Evolutionary Algorithms · Hyperparameter Optimization +3

Learning to Walk via Deep Reinforcement Learning

no code implementations 26 Dec 2018 Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, Sergey Levine

In this paper, we propose a sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn neural network policies.

reinforcement-learning · Reinforcement Learning (RL)
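The maximum-entropy objective this method builds on augments expected return with a policy-entropy bonus, weighted by a temperature $\alpha$ that trades off exploration against reward:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
  \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```

Keeping the policy stochastic via the entropy term is part of what reduces the per-task tuning burden the abstract highlights.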

Data Efficient Reinforcement Learning for Legged Robots

no code implementations 8 Jul 2019 Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani

We present a model-based framework for robot locomotion that achieves walking based on only 4.5 minutes (45,000 control steps) of data collected on a quadruped robot.

Model Predictive Control · reinforcement-learning +2

Zero-shot Imitation Learning from Demonstrations for Legged Robot Visual Navigation

no code implementations 27 Sep 2019 Xinlei Pan, Tingnan Zhang, Brian Ichter, Aleksandra Faust, Jie Tan, Sehoon Ha

Here, we propose a zero-shot imitation learning approach for training a visual navigation policy on legged robots from human (third-person perspective) demonstrations, enabling high-quality navigation and cost-effective data collection.

Disentanglement · Imitation Learning +1

Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning

no code implementations 2 Mar 2020 Xingyou Song, Yuxiang Yang, Krzysztof Choromanski, Ken Caluwaerts, Wenbo Gao, Chelsea Finn, Jie Tan

Learning adaptable policies is crucial for robots to operate autonomously in our complex and quickly changing world.

Meta-Learning

Learning Agile Robotic Locomotion Skills by Imitating Animals

no code implementations 2 Apr 2020 Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, Sergey Levine

In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.

Domain Adaptation · Imitation Learning

Learning Agile Locomotion via Adversarial Training

no code implementations 3 Aug 2020 Yujin Tang, Jie Tan, Tatsuya Harada

In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.

Reinforcement Learning (RL)

Learning to be Safe: Deep RL with a Safety Critic

no code implementations 27 Oct 2020 Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn

Safety is an essential component for deploying reinforcement learning (RL) algorithms in real-world scenarios, and is critical during the learning process itself.

Reinforcement Learning (RL) · Transfer Learning

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

no code implementations 4 Feb 2021 Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains.

reinforcement-learning · Reinforcement Learning (RL)

Physics-informed Evolutionary Strategy based Control for Mitigating Delayed Voltage Recovery

no code implementations 29 Nov 2021 Yan Du, Qiuhua Huang, Renke Huang, Tianzhixi Yin, Jie Tan, Wenhao Yu, Xinya Li

Reinforcement learning methods have been developed for the same or similar challenging control problems, but they suffer from training inefficiency and lack of robustness for "corner or unseen" scenarios.

Safe Reinforcement Learning for Legged Locomotion

no code implementations 5 Mar 2022 Tsung-Yen Yang, Tingnan Zhang, Linda Luu, Sehoon Ha, Jie Tan, Wenhao Yu

In this paper, we propose a safe reinforcement learning framework that switches between a safe recovery policy that prevents the robot from entering unsafe states, and a learner policy that is optimized to complete the task.

reinforcement-learning · Reinforcement Learning (RL) +1
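The switching framework described above can be sketched as a simple supervisor: a safety criterion decides at each step whether the task-optimizing learner may act or control must pass to the safe recovery policy. All three components here are hypothetical stand-ins (the paper learns both the safety criterion and the recovery policy):

```python
def safe_rl_step(state, learner_policy, recovery_policy, is_unsafe):
    """One control step of the switching scheme: hand control to the
    recovery policy whenever the safety criterion flags the state,
    otherwise let the learner policy pursue the task."""
    if is_unsafe(state):
        return recovery_policy(state), "recovery"
    return learner_policy(state), "learner"

# Toy 1-D components: states with |s| > 1 are considered unsafe.
is_unsafe = lambda s: abs(s) > 1.0
learner = lambda s: 2.0 * s   # aggressive task policy
recovery = lambda s: -s       # push back toward the safe set
```

The design keeps exploration unconstrained inside the safe set while guaranteeing a fixed fallback behavior near its boundary.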

Learning Semantics-Aware Locomotion Skills from Human Demonstration

no code implementations 27 Jun 2022 Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed.

PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations

no code implementations 27 Jul 2022 Kuang-Huei Lee, Ofir Nachum, Tingnan Zhang, Sergio Guadarrama, Jie Tan, Wenhao Yu

Evolution Strategy (ES) algorithms have shown promising results in training complex robotic control policies due to their massive parallelism capability, simple implementation, effective parameter-space exploration, and fast training time.

Representation Learning

Robotic Table Wiping via Reinforcement Learning and Whole-body Trajectory Optimization

no code implementations 19 Oct 2022 Thomas Lew, Sumeet Singh, Mario Prats, Jeffrey Bingham, Jonathan Weisz, Benjie Holson, Xiaohan Zhang, Vikas Sindhwani, Yao Lu, Fei Xia, Peng Xu, Tingnan Zhang, Jie Tan, Montserrat Gonzalez

This problem is challenging, as it requires planning wiping actions while reasoning over uncertain latent dynamics of crumbs and spills captured via high-dimensional visual observations.

reinforcement-learning Reinforcement Learning (RL)

Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning

no code implementations 6 Dec 2022 Ramij R. Hossain, Tianzhixi Yin, Yan Du, Renke Huang, Jie Tan, Wenhao Yu, Yuan Liu, Qiuhua Huang

We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model, instead of a real-world power-grid or physics-based simulation, is utilized with the policy learning framework, making the process faster and sample efficient.

Imitation Learning · reinforcement-learning +1
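The surrogate-model idea above can be sketched in two steps: fit a dynamics model to logged grid transitions, then evaluate the policy by rolling it out inside the surrogate instead of a power-grid simulation. A toy linear version, where least squares stands in for the paper's DNN surrogate:

```python
import numpy as np

def fit_surrogate(states, actions, next_states):
    """Fit a linear surrogate next_state ~ [s, a] @ W by least squares
    (a toy stand-in for the DNN dynamic surrogate model)."""
    X = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def rollout_surrogate(W, policy, s0, horizon):
    """Roll a policy out inside the surrogate model rather than the
    real grid or a physics simulation, which is what makes policy
    learning faster and more sample efficient."""
    s, traj = s0, [s0]
    for _ in range(horizon):
        a = policy(s)
        s = np.hstack([s, a]) @ W  # surrogate one-step prediction
        traj.append(s)
    return traj
```

For instance, transitions generated by the hypothetical dynamics s' = 0.9 s + 0.1 a are recovered exactly by the linear fit, after which arbitrarily many training rollouts cost no further real-world data.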

Continuous Versatile Jumping Using Learned Action Residuals

no code implementations 17 Apr 2023 Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Jumping is essential for legged robots to traverse through difficult terrains.

Learning and Adapting Agile Locomotion Skills by Transferring Experience

no code implementations 19 Apr 2023 Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine

Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.

Reinforcement Learning (RL)

Language to Rewards for Robotic Skill Synthesis

no code implementations 14 Jun 2023 Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia

However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot.

In-Context Learning · Logical Reasoning

Transforming a Quadruped into a Guide Robot for the Visually Impaired: Formalizing Wayfinding, Interaction Modeling, and Safety Mechanism

no code implementations 24 Jun 2023 J. Taery Kim, Wenhao Yu, Yash Kothari, Jie Tan, Greg Turk, Sehoon Ha

To build a successful guide robot, our paper explores three key topics: (1) formalizing the navigation mechanism of a guide dog and a human, (2) developing a data-driven model of their interaction, and (3) improving user safety.

Discovering Adaptable Symbolic Algorithms from Scratch

no code implementations 31 Jul 2023 Stephen Kelly, Daniel S. Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Naresh Boddeti, Jie Tan, Esteban Real

We evolve modular policies that tune their model parameters and alter their inference algorithm on-the-fly to adapt to sudden environmental changes.

AutoML

Toward Intelligent Emergency Control for Large-scale Power Systems: Convergence of Learning, Physics, Computing and Control

no code implementations 8 Oct 2023 Qiuhua Huang, Renke Huang, Tianzhixi Yin, Sohom Datta, Xueqing Sun, Jason Hou, Jie Tan, Wenhao Yu, Yuan Liu, Xinya Li, Bruce Palmer, Ang Li, Xinda Ke, Marianna Vaiman, Song Wang, Yousu Chen

Our developed methods and platform based on the convergence framework have been applied to a large (more than 3000 buses) Texas power system, and tested with 56000 scenarios.

Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

no code implementations 17 Oct 2023 Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez

As a more scalable and cost-effective alternative, we study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques.

Binary Classification

Creative Robot Tool Use with Large Language Models

no code implementations 19 Oct 2023 Mengdi Xu, Peide Huang, Wenhao Yu, Shiqi Liu, Xilun Zhang, Yaru Niu, Tingnan Zhang, Fei Xia, Jie Tan, Ding Zhao

This paper investigates the feasibility of imbuing robots with the ability to creatively use tools in tasks that involve implicit physical constraints and long-term planning.

Motion Planning · Task and Motion Planning
