Search Results for author: Zihao Liang

Found 5 papers, 0 papers with code

Adaptive Policy Learning to Additional Tasks

no code implementations • 24 May 2023 • Wenjian Hao, Zehui Lu, Zihao Liang, Tianyu Zhou, Shaoshuai Mou

This paper develops a policy learning method for tuning a pre-trained policy to adapt to additional tasks without altering the original task.

Policy Gradient Methods

Policy Learning based on Deep Koopman Representation

no code implementations • 24 May 2023 • Wenjian Hao, Paulo C. Heredia, Bowen Huang, Zehui Lu, Zihao Liang, Shaoshuai Mou

This paper proposes a policy learning algorithm based on Koopman operator theory and the policy gradient approach, which simultaneously approximates an unknown dynamical system and searches for an optimal policy using observations gathered through interaction with the environment.
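
As a rough illustration of fitting Koopman-style lifted linear dynamics from interaction data (a minimal sketch only, not the paper's implementation; the lifting function, the toy system, and all dimensions below are assumptions made for this example):

```python
# Minimal illustrative sketch: fit a linear model on Koopman-lifted states from
# interaction data, yielding a surrogate that a policy-gradient step could roll out.
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    """Hypothetical lifting: raw state plus simple nonlinear features."""
    return np.concatenate([x, np.sin(x), x ** 2])

def step(x, u):
    """Toy unknown dynamics standing in for the real environment."""
    return 0.9 * x + 0.1 * np.tanh(x) + 0.2 * u

# Gather transitions by interacting with the (unknown) system.
Z, U, Zn = [], [], []
x = rng.normal(size=2)
for _ in range(500):
    u = rng.normal(size=2)
    x_next = step(x, u)
    Z.append(phi(x)); U.append(u); Zn.append(phi(x_next))
    x = x_next
Z, U, Zn = np.array(Z), np.array(U), np.array(Zn)

# Least-squares fit of lifted linear dynamics  z' ≈ A z + B u.
AB, *_ = np.linalg.lstsq(np.hstack([Z, U]), Zn, rcond=None)
A, B = AB[: Z.shape[1]].T, AB[Z.shape[1]:].T

# One-step prediction with the fitted surrogate model.
z_pred = A @ phi(x) + B @ u
```

A policy-gradient loop would then optimize the policy by rolling out this fitted surrogate; that part is omitted from the sketch.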

A Data-Driven Approach for Inverse Optimal Control

no code implementations • 31 Mar 2023 • Zihao Liang, Wenjian Hao, Shaoshuai Mou

By assuming the objective function to be learned is parameterized as a linear combination of features with unknown weights, the proposed IOC approach jointly learns a Koopman representation of the unknown dynamics and the unknown weights of the objective function.
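
A minimal sketch of the assumed objective structure, in notation of our own choosing (the paper's exact features and recovery procedure may differ): the features $\phi$ are known, and only the weights $\omega$ are to be recovered.

```latex
% Illustrative parameterization: known feature vector \phi, unknown weights \omega.
J_{\omega}(x_{0:T}, u_{0:T}) \;=\; \sum_{t=0}^{T} \omega^{\top}\, \phi(x_t, u_t)
```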

GPPF: A General Perception Pre-training Framework via Sparsely Activated Multi-Task Learning

no code implementations • 3 Aug 2022 • Benyuan Sun, Jin Dai, Zihao Liang, Congying Liu, Yi Yang, Bo Bai

SIMT lays the foundation for pre-training with large-scale multi-task, multi-domain datasets and proves essential for stable training in our GPPF experiments.

Multi-Task Learning

Learning Objective Functions Incrementally by Inverse Optimal Control

no code implementations • 28 Oct 2020 • Wanxin Jin, Zihao Liang, Shaoshuai Mou

This paper proposes an inverse optimal control method which enables a robot to incrementally learn a control objective function from a collection of trajectory segments.
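
As a hedged sketch of the incremental flavor described above (not the authors' algorithm; the constraint matrices and update rule below are illustrative assumptions), each new trajectory segment could contribute a linear condition on the unknown weights that is folded in as it arrives:

```python
# Illustrative only: suppose each segment i yields a matrix F_i such that the true
# weights (up to scale) satisfy F_i @ w ≈ 0 under optimality conditions. We then
# accumulate H = sum_i F_i^T F_i incrementally and estimate w as the eigenvector
# of H with the smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, 0.5, 2.0])          # hypothetical ground-truth weights
w_true /= np.linalg.norm(w_true)

H = np.zeros((3, 3))
for _ in range(20):                          # stream of trajectory segments
    F = rng.normal(size=(4, 3))
    F -= np.outer(F @ w_true, w_true)        # synthesize F_i with F_i @ w_true ≈ 0
    H += F.T @ F                             # fold in the new segment
    w_hat = np.linalg.eigh(H)[1][:, 0]       # current estimate (up to sign/scale)

print(np.abs(w_hat @ w_true))                # approaches 1 as segments accumulate
```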

Robotics
