Search Results for author: Jie Tan

Found 51 papers, 13 papers with code

Natural Language-Assisted Multi-modal Medication Recommendation

1 code implementation • 13 Jan 2025 • Jie Tan, Yu Rong, Kangfei Zhao, Tian Bian, Tingyang Xu, Junzhou Huang, Hong Cheng, Helen Meng

NLA-MMR formulates CMR as an alignment problem between the patient and medication modalities.
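The abstract only names the alignment formulation; as a rough, hypothetical illustration of aligning two modality embedding spaces with a cosine-similarity score matrix (all function names, shapes, and numbers here are invented, not the paper's actual model):

```python
import numpy as np

def alignment_scores(patient_emb, med_emb):
    """Cosine-similarity matrix between patient and medication embeddings."""
    p = patient_emb / np.linalg.norm(patient_emb, axis=1, keepdims=True)
    m = med_emb / np.linalg.norm(med_emb, axis=1, keepdims=True)
    return p @ m.T  # shape: (num_patients, num_medications)

# toy usage: recommend the best-aligned medication for each patient
patients = np.random.default_rng(0).standard_normal((4, 16))
meds = np.random.default_rng(1).standard_normal((6, 16))
scores = alignment_scores(patients, meds)
recommended = scores.argmax(axis=1)  # one medication index per patient
```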

Learning Multi-Agent Loco-Manipulation for Long-Horizon Quadrupedal Pushing

no code implementations • 11 Nov 2024 • Yuming Feng, Chuye Hong, Yaru Niu, Shiqi Liu, Yuxiang Yang, Wenhao Yu, Tingnan Zhang, Jie Tan, Ding Zhao

Quadrupedal robots have recently achieved significant success in locomotion, but their manipulation capabilities, particularly in handling large objects, remain limited, restricting their usefulness in demanding real-world applications such as search and rescue, construction, industrial automation, and room organization.

Multi-agent Reinforcement Learning

Adaptive Coordinators and Prompts on Heterogeneous Graphs for Cross-Domain Recommendations

1 code implementation • 15 Oct 2024 • Hengyu Zhang, Chunxu Shen, Xiangguo Sun, Jie Tan, Yu Rong, Chengzhi Piao, Hong Cheng, Lingling Yi

However, integrating multi-domain knowledge for cross-domain recommendation is difficult due to inherent disparities in user behavior and item characteristics, and due to the risk of negative transfer, where irrelevant or conflicting information from the source domains adversely impacts the target domain's performance.

Recommendation Systems

Gameplay Filters: Robust Zero-Shot Safety through Adversarial Imagination

no code implementations • 1 May 2024 • Duy P. Nguyen, Kai-Chieh Hsu, Wenhao Yu, Jie Tan, Jaime F. Fisac

Despite the impressive recent advances in learning-based robot control, ensuring robustness to out-of-distribution conditions remains an open challenge.

Creative Robot Tool Use with Large Language Models

no code implementations • 19 Oct 2023 • Mengdi Xu, Peide Huang, Wenhao Yu, Shiqi Liu, Xilun Zhang, Yaru Niu, Tingnan Zhang, Fei Xia, Jie Tan, Ding Zhao

This paper investigates the feasibility of imbuing robots with the ability to creatively use tools in tasks that involve implicit physical constraints and long-term planning.

Motion Planning Task and Motion Planning

Predicting Human Impressions of Robot Performance During Navigation Tasks

no code implementations • 17 Oct 2023 • Qiping Zhang, Nathan Tsoi, Mofeed Nagib, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez

Further, when predicting robot performance as a binary classification task on unseen users' data, the F1 Score of machine learning models more than doubled in comparison to predicting performance on a 5-point scale.

Binary Classification
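For reference on the metric the abstract cites, a self-contained F1 computation for binary labels (a generic definition, not the paper's evaluation code):

```python
def f1_score_binary(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```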

Toward Intelligent Emergency Control for Large-scale Power Systems: Convergence of Learning, Physics, Computing and Control

no code implementations • 8 Oct 2023 • Qiuhua Huang, Renke Huang, Tianzhixi Yin, Sohom Datta, Xueqing Sun, Jason Hou, Jie Tan, Wenhao Yu, YuAn Liu, Xinya Li, Bruce Palmer, Ang Li, Xinda Ke, Marianna Vaiman, Song Wang, Yousu Chen

Our developed methods and platform based on the convergence framework have been applied to a large Texas power system (more than 3,000 buses) and tested with 56,000 scenarios.

Discovering Adaptable Symbolic Algorithms from Scratch

no code implementations • 31 Jul 2023 • Stephen Kelly, Daniel S. Park, Xingyou Song, Mitchell McIntire, Pranav Nashikkar, Ritam Guha, Wolfgang Banzhaf, Kalyanmoy Deb, Vishnu Naresh Boddeti, Jie Tan, Esteban Real

We evolve modular policies that tune their model parameters and alter their inference algorithm on-the-fly to adapt to sudden environmental changes.

AutoML
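The abstract describes evolving policies rather than training them by gradient descent; a toy mutate-and-select loop over a parameter vector sketches the general evolutionary-search idea (purely illustrative, not the paper's actual algorithm or search space):

```python
import random

def evolve(fitness, dim=3, population=8, generations=30, seed=0):
    """Minimal evolutionary loop: repeatedly mutate the best candidate
    and keep whichever scores higher under `fitness`."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_f = fitness(best)
    for _ in range(generations):
        for _ in range(population):
            cand = [x + rng.gauss(0, 0.2) for x in best]
            f = fitness(cand)
            if f > best_f:
                best, best_f = cand, f
    return best, best_f

# toy fitness: maximize negative squared distance to a target vector
target = [1.0, -2.0, 0.5]
sol, score = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)))
```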

Transforming a Quadruped into a Guide Robot for the Visually Impaired: Formalizing Wayfinding, Interaction Modeling, and Safety Mechanism

no code implementations • 24 Jun 2023 • J. Taery Kim, Wenhao Yu, Yash Kothari, Jie Tan, Greg Turk, Sehoon Ha

To build a successful guide robot, our paper explores three key topics: (1) formalizing the navigation mechanism of a guide dog and a human, (2) developing a data-driven model of their interaction, and (3) improving user safety.

Datasets and Benchmarks for Offline Safe Reinforcement Learning

3 code implementations • 15 Jun 2023 • Zuxin Liu, Zijian Guo, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, Ding Zhao

This paper presents a comprehensive benchmarking suite tailored to offline safe reinforcement learning (RL) challenges, aiming to foster progress in the development and evaluation of safe learning algorithms in both the training and deployment phases.

Autonomous Driving Benchmarking +5

Language to Rewards for Robotic Skill Synthesis

no code implementations • 14 Jun 2023 • Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia

However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot.

In-Context Learning Logical Reasoning

Learning and Adapting Agile Locomotion Skills by Transferring Experience

no code implementations • 19 Apr 2023 • Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine

Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.

Reinforcement Learning (RL)

Continuous Versatile Jumping Using Learned Action Residuals

no code implementations • 17 Apr 2023 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Jumping is essential for legged robots to traverse through difficult terrains.

Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning

no code implementations • 6 Dec 2022 • Ramij R. Hossain, Tianzhixi Yin, Yan Du, Renke Huang, Jie Tan, Wenhao Yu, YuAn Liu, Qiuhua Huang

We propose a novel model-based DRL framework where a deep neural network (DNN)-based dynamic surrogate model, instead of a real-world power grid or physics-based simulation, is utilized with the policy learning framework, making the process faster and more sample-efficient.

Deep Reinforcement Learning Imitation Learning +2
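The abstract describes replacing the real system with a learned surrogate during policy learning; a hypothetical miniature of that idea, with a linear least-squares model standing in for the DNN surrogate (every name and number here is invented for illustration):

```python
import numpy as np

def fit_surrogate(states, actions, next_states):
    """Fit a linear surrogate s' ~ [s, a] @ W from logged transitions
    (a stand-in for the DNN dynamic surrogate model)."""
    X = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def plan_greedy(W, state, candidate_actions, score):
    """Score each candidate action by the surrogate's predicted next state."""
    preds = [np.hstack([state, a]) @ W for a in candidate_actions]
    return max(range(len(candidate_actions)), key=lambda i: score(preds[i]))

# toy system: s' = 0.9*s + a (scalar), learned from random transitions
rng = np.random.default_rng(0)
s = rng.standard_normal((200, 1))
a = rng.standard_normal((200, 1))
W = fit_surrogate(s, a, 0.9 * s + a)
# drive the state toward zero: score = -|predicted s'|
best = plan_greedy(W, np.array([1.0]), [np.array([-0.9]), np.array([0.9])],
                   lambda p: -abs(p[0]))
```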

Robotic Table Wiping via Reinforcement Learning and Whole-body Trajectory Optimization

no code implementations • 19 Oct 2022 • Thomas Lew, Sumeet Singh, Mario Prats, Jeffrey Bingham, Jonathan Weisz, Benjie Holson, Xiaohan Zhang, Vikas Sindhwani, Yao Lu, Fei Xia, Peng Xu, Tingnan Zhang, Jie Tan, Montserrat Gonzalez

This problem is challenging, as it requires planning wiping actions while reasoning over uncertain latent dynamics of crumbs and spills captured via high-dimensional visual observations.

reinforcement-learning Reinforcement Learning (RL)

PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations

no code implementations • 27 Jul 2022 • Kuang-Huei Lee, Ofir Nachum, Tingnan Zhang, Sergio Guadarrama, Jie Tan, Wenhao Yu

Evolution Strategy (ES) algorithms have shown promising results in training complex robotic control policies due to their massive parallelism capability, simple implementation, effective parameter-space exploration, and fast training time.

Representation Learning

Learning Semantics-Aware Locomotion Skills from Human Demonstration

no code implementations • 27 Jun 2022 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed.

On the Robustness of Safe Reinforcement Learning under Observational Perturbations

1 code implementation • 29 May 2022 • Zuxin Liu, Zijian Guo, Zhepeng Cen, huan zhang, Jie Tan, Bo Li, Ding Zhao

One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward.

Adversarial Attack reinforcement-learning +3

Safe Reinforcement Learning for Legged Locomotion

no code implementations • 5 Mar 2022 • Tsung-Yen Yang, Tingnan Zhang, Linda Luu, Sehoon Ha, Jie Tan, Wenhao Yu

In this paper, we propose a safe reinforcement learning framework that switches between a safe recovery policy that prevents the robot from entering unsafe states, and a learner policy that is optimized to complete the task.

reinforcement-learning Reinforcement Learning +2
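The switching structure the abstract describes can be sketched in a few lines; the trigger condition and policies below are invented placeholders, not the paper's actual safety criterion:

```python
def select_action(state, learner_policy, recovery_policy, is_unsafe):
    """Switch to the recovery policy whenever a safety trigger fires;
    otherwise let the task-optimizing learner policy act."""
    if is_unsafe(state):
        return recovery_policy(state)
    return learner_policy(state)

# toy example: recover when the robot's roll angle is too large
act = select_action(
    {"roll": 0.6},
    learner_policy=lambda s: "step_forward",
    recovery_policy=lambda s: "brace",
    is_unsafe=lambda s: abs(s["roll"]) > 0.5,
)
```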

Physics-informed Evolutionary Strategy based Control for Mitigating Delayed Voltage Recovery

no code implementations • 29 Nov 2021 • Yan Du, Qiuhua Huang, Renke Huang, Tianzhixi Yin, Jie Tan, Wenhao Yu, Xinya Li

Reinforcement learning methods have been developed for the same or similar challenging control problems, but they suffer from training inefficiency and lack of robustness for "corner or unseen" scenarios.

Fast and Efficient Locomotion via Learned Gait Transitions

1 code implementation • 9 Apr 2021 • Yuxiang Yang, Tingnan Zhang, Erwin Coumans, Jie Tan, Byron Boots

We focus on the problem of developing energy efficient controllers for quadrupedal robots.

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

no code implementations • 4 Feb 2021 • Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains.

Deep Reinforcement Learning reinforcement-learning +1

Learning to be Safe: Deep RL with a Safety Critic

no code implementations • 27 Oct 2020 • Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn

Safety is an essential component for deploying reinforcement learning (RL) algorithms in real-world scenarios, and is critical during the learning process itself.

Reinforcement Learning (RL) Transfer Learning

Learning Agile Locomotion via Adversarial Training

no code implementations • 3 Aug 2020 • Yujin Tang, Jie Tan, Tatsuya Harada

In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.

Reinforcement Learning (RL)

Learning Agile Robotic Locomotion Skills by Imitating Animals

no code implementations • 2 Apr 2020 • Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, Sergey Levine

In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.

Domain Adaptation Imitation Learning +1

Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning

no code implementations • 2 Mar 2020 • Xingyou Song, Yuxiang Yang, Krzysztof Choromanski, Ken Caluwaerts, Wenbo Gao, Chelsea Finn, Jie Tan

Learning adaptable policies is crucial for robots to operate autonomously in our complex and quickly changing world.

Meta-Learning

Learning to Walk in the Real World with Minimal Human Effort

1 code implementation • 20 Feb 2020 • Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan

In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.

Deep Reinforcement Learning Multi-Task Learning

Policies Modulating Trajectory Generators

3 code implementations • 7 Oct 2019 • Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke

We propose an architecture for learning complex controllable behaviors by having simple Policies Modulate Trajectory Generators (PMTG), a powerful combination that can provide both memory and prior knowledge to the controller.

Deep Reinforcement Learning
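The abstract's "policies modulate trajectory generators" structure can be sketched minimally: an open-loop periodic generator whose parameters the policy sets each step, plus a residual action. The specific generator, parameters, and numbers below are assumptions for illustration, not the paper's controller:

```python
import math

class TrajectoryGenerator:
    """Open-loop periodic generator; its internal phase gives the
    controller memory, and the policy modulates it each step."""
    def __init__(self):
        self.phase = 0.0

    def step(self, frequency, amplitude, dt=0.01):
        self.phase = (self.phase + 2 * math.pi * frequency * dt) % (2 * math.pi)
        return amplitude * math.sin(self.phase)

def pmtg_action(tg, policy_output):
    """Combine the modulated generator output with a learned residual."""
    frequency, amplitude, residual = policy_output
    return tg.step(frequency, amplitude) + residual

# toy rollout with a fixed (stand-in) policy output
tg = TrajectoryGenerator()
actions = [pmtg_action(tg, (1.5, 0.3, 0.05)) for _ in range(100)]
```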

Learning Fast Adaptation with Meta Strategy Optimization

1 code implementation • 28 Sep 2019 • Wenhao Yu, Jie Tan, Yunfei Bai, Erwin Coumans, Sehoon Ha

The key idea behind MSO is to expose the same adaptation process, Strategy Optimization (SO), to both the training and testing phases.

Meta-Learning
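The abstract describes running the same Strategy Optimization search at training and test time; a hypothetical miniature of SO as a random search over a latent strategy vector that maximizes episode return (the search method, dimensions, and toy environment are all invented):

```python
import random

def strategy_optimization(evaluate, dim=2, iters=60, seed=0):
    """Random-search SO: find the latent strategy vector that maximizes
    episode return; the same procedure runs in training and deployment."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_ret = evaluate(best)
    for _ in range(iters):
        cand = [x + rng.gauss(0, 0.3) for x in best]
        ret = evaluate(cand)
        if ret > best_ret:
            best, best_ret = cand, ret
    return best, best_ret

# toy "environment" whose return peaks at strategy (0.5, -0.5)
ret = lambda z: -((z[0] - 0.5) ** 2 + (z[1] + 0.5) ** 2)
strategy, best_ret = strategy_optimization(ret)
```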

Zero-shot Imitation Learning from Demonstrations for Legged Robot Visual Navigation

no code implementations • 27 Sep 2019 • Xinlei Pan, Tingnan Zhang, Brian Ichter, Aleksandra Faust, Jie Tan, Sehoon Ha

Here, we propose a zero-shot imitation learning approach for training a visual navigation policy on legged robots from human (third-person perspective) demonstrations, enabling high-quality navigation and cost-effective data collection.

Disentanglement Imitation Learning +1

Data Efficient Reinforcement Learning for Legged Robots

no code implementations • 8 Jul 2019 • Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani

We present a model-based framework for robot locomotion that achieves walking based on only 4.5 minutes (45,000 control steps) of data collected on a quadruped robot.

Model Predictive Control reinforcement-learning +3
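The entry's tags pair a learned model with model-predictive control; a generic random-shooting planner illustrates how a learned model can be used for control (this is a common MPC baseline sketched under invented assumptions, not necessarily the paper's exact planner):

```python
import numpy as np

def random_shooting_mpc(model, state, horizon=10, samples=100, seed=0):
    """Sample random action sequences, roll each out through the learned
    model, and return the first action of the lowest-cost sequence."""
    rng = np.random.default_rng(seed)
    seqs = rng.uniform(-1.0, 1.0, size=(samples, horizon))
    best_cost, best_first = np.inf, 0.0
    for seq in seqs:
        s, cost = state, 0.0
        for a in seq:
            s = model(s, a)
            cost += s ** 2  # toy objective: drive the scalar state to zero
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# toy learned model: s' = s + 0.1 * a
action = random_shooting_mpc(lambda s, a: s + 0.1 * a, state=1.0)
```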

NoRML: No-Reward Meta Learning

1 code implementation • 4 Mar 2019 • Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn

To this end, we introduce a method that allows for self-adaptation of learned policies: No-Reward Meta Learning (NoRML).

Meta-Learning Reinforcement Learning +1

Learning to Walk via Deep Reinforcement Learning

no code implementations • 26 Dec 2018 • Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, Sergey Levine

In this paper, we propose a sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn neural network policies.

Deep Reinforcement Learning reinforcement-learning +1

Large-Scale Evolution of Image Classifiers

2 code implementations • ICML 2017 • Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin

Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone.

Evolutionary Algorithms Hyperparameter Optimization +3

Preparing for the Unknown: Learning a Universal Policy with Online System Identification

1 code implementation • 8 Feb 2017 • Wenhao Yu, Jie Tan, C. Karen Liu, Greg Turk

Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment.

Friction
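The UP-OSI split the abstract describes, an online system identification module feeding a universal policy, can be sketched in miniature; here linear least squares stands in for the OSI network and a made-up feedback law for the universal policy (all shapes, gains, and features are invented for illustration):

```python
import numpy as np

def fit_osi(histories, true_params):
    """OSI sketch: regress hidden dynamics parameters from recent
    state-action history features."""
    W, *_ = np.linalg.lstsq(histories, true_params, rcond=None)
    return W

def universal_policy(state, mu, gains=(2.0, 1.0)):
    """UP sketch: a feedback law explicitly conditioned on the
    identified parameter mu."""
    return -gains[0] * state - gains[1] * mu * state

# toy setting: a scalar hidden parameter mu is linearly recoverable
# from a 3-dimensional history feature vector
rng = np.random.default_rng(0)
mu_true = rng.uniform(0.5, 1.5, size=(500, 1))
histories = mu_true @ np.array([[0.7, -0.2, 0.1]])  # (500, 3)
W = fit_osi(histories, mu_true)
mu_hat = float(histories[0] @ W)        # online identification step
action = universal_policy(1.0, mu_hat)  # policy conditioned on mu_hat
```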
