no code implementations • 3 Mar 2025 • Xirui Shi, Jun Jin
Inspired by classic robot motion generation methods such as DMPs and ProMPs, which capture temporally and spatially consistent trajectory dynamics using low-dimensional vectors, and by recent advances in diffusion-based image generation that use consistency models with probability flow ODEs to accelerate the denoising process, we propose Fast Robot Motion Diffusion (FRMD).
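A minimal sketch of the few-step sampling idea behind consistency models, assuming a simple PyTorch trajectory denoiser with hypothetical dimensions and noise levels; the actual FRMD architecture, conditioning, and training are not described in this snippet:

```python
import torch
import torch.nn as nn

class ConsistencyMotionModel(nn.Module):
    """Maps a noisy trajectory at noise level sigma directly to a clean trajectory."""
    def __init__(self, horizon=32, action_dim=7, hidden=256):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(horizon * action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, horizon * action_dim),
        )

    def forward(self, noisy_traj, sigma):
        flat = noisy_traj.flatten(1)                      # (batch, horizon * action_dim)
        x = torch.cat([flat, sigma.expand(flat.shape[0], 1)], dim=1)
        return self.net(x).view(-1, self.horizon, self.action_dim)

@torch.no_grad()
def sample_trajectory(model, sigmas=(80.0, 10.0)):
    # One consistency-function evaluation from pure noise; optional extra steps
    # re-noise the result at a smaller sigma and denoise again (few-step sampling),
    # instead of running a long iterative denoising chain.
    x = sigmas[0] * torch.randn(1, model.horizon, model.action_dim)
    traj = model(x, torch.tensor([sigmas[0]]))
    for s in sigmas[1:]:
        noisy = traj + s * torch.randn_like(traj)
        traj = model(noisy, torch.tensor([s]))
    return traj
```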
no code implementations • 31 Dec 2023 • Qianxi Li, Yingyue Cao, Jikun Kang, Tianpei Yang, Xi Chen, Jun Jin, Matthew E. Taylor
Fine-tuning Large Language Models (LLMs) adapts a trained model to specific downstream tasks, significantly improving task-specific performance.
no code implementations • NeurIPS 2023 • Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, Ping Luo
In this work, we introduce EmbodiedGPT, an end-to-end multi-modal foundation model for embodied AI, empowering embodied agents with multi-modal understanding and execution capabilities.
1 code implementation • ICLR 2023 • Hongming Zhang, Chenjun Xiao, Han Wang, Jun Jin, Bo Xu, Martin Müller
In this work, we further exploit the information in the replay memory by treating it as an empirical Replay Memory MDP (RM-MDP).
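A minimal sketch of one way to build an empirical MDP from a replay buffer, assuming discrete, hashable states and actions; how the paper constructs and exploits the RM-MDP may differ:

```python
from collections import defaultdict

def build_empirical_mdp(replay_buffer):
    """replay_buffer: iterable of (state, action, reward, next_state) tuples.
    Returns empirical transition probabilities P(s'|s,a) and mean rewards r(s,a)."""
    counts = defaultdict(lambda: defaultdict(int))
    reward_sum = defaultdict(float)
    visits = defaultdict(int)
    for s, a, r, s_next in replay_buffer:
        counts[(s, a)][s_next] += 1
        reward_sum[(s, a)] += r
        visits[(s, a)] += 1
    transition = {
        sa: {s_next: n / visits[sa] for s_next, n in nexts.items()}
        for sa, nexts in counts.items()
    }
    mean_reward = {sa: reward_sum[sa] / visits[sa] for sa in visits}
    return transition, mean_reward
```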
1 code implementation • 16 Dec 2022 • Zichen Zhang, Jun Jin, Martin Jagersand, Jun Luo, Dale Schuurmans
To tackle this issue, we propose Decentralized CEM (DecentCEM), a simple but effective improvement over classical CEM that uses an ensemble of CEM instances running independently of one another, each performing a local improvement of its own sampling distribution.
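A minimal sketch of the ensemble idea, assuming a numpy objective and Gaussian sampling distributions; the hyper-parameters and the rule for picking the final solution here are illustrative, not the paper's:

```python
import numpy as np

def cem_step(mean, std, objective, n_samples=64, elite_frac=0.1):
    """One local CEM update of a single instance's Gaussian sampling distribution."""
    samples = np.random.normal(mean, std, size=(n_samples, mean.shape[0]))
    scores = np.array([objective(x) for x in samples])
    elites = samples[np.argsort(scores)[-int(n_samples * elite_frac):]]
    return elites.mean(axis=0), elites.std(axis=0) + 1e-6

def decentralized_cem(objective, dim=2, n_instances=5, n_iters=20):
    # Each instance keeps its own (mean, std) and is updated independently of the others.
    instances = [(np.random.uniform(-2, 2, dim), np.ones(dim)) for _ in range(n_instances)]
    for _ in range(n_iters):
        instances = [cem_step(m, s, objective) for m, s in instances]
    # Return the best mean found across the ensemble.
    return max((m for m, _ in instances), key=objective)

best = decentralized_cem(lambda x: -np.sum((x - 1.0) ** 2))  # optimum near [1, 1]
```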
1 code implementation • 6 Dec 2022 • Amirmohammad Karimi, Jun Jin, Jun Luo, A. Rupam Mahmood, Martin Jagersand, Samuele Tosatto
In classic reinforcement learning algorithms, agents make decisions at discrete and fixed time intervals.
no code implementations • 28 Nov 2022 • Mohammad Hossain, Derssie Mebratu, Niranjan Hasabnis, Jun Jin, Gaurav Chaudhary, Noah Shen
To address this problem and realize the full potential of the underlying platform, we develop a machine-learning-based technique to characterize, profile, and predict workloads running in the cloud environment.
no code implementations • 13 Nov 2022 • Jun Jin, Hongming Zhang, Jun Luo
This paper tackles the problem of how to pre-train a model and make it a generally reusable backbone for downstream task learning.
no code implementations • 25 Oct 2022 • Banafsheh Rafiee, Sina Ghiassian, Jun Jin, Richard Sutton, Jun Luo, Adam White
In this paper, we explore an approach to auxiliary task discovery in reinforcement learning based on ideas from representation learning.
no code implementations • 1 Apr 2022 • Banafsheh Rafiee, Jun Jin, Jun Luo, Adam White
Our focus on the target policies of the auxiliary tasks is motivated by the fact that the target policy determines both the behavior the agent makes predictions about and the state-action distribution the agent is trained on, which in turn affects learning of the main task.
no code implementations • 28 Feb 2022 • Jun Jin, Martin Jagersand
We study the problem of generalizable task learning from human demonstration videos without extra training on the robot or pre-recorded robot motions.
no code implementations • 29 Sep 2021 • Zichen Zhang, Jun Jin, Martin Jagersand, Jun Luo, Dale Schuurmans
Further, we extend the decentralized approach to sequential decision-making problems, showing in 13 continuous control benchmark environments that it matches or outperforms state-of-the-art CEM algorithms in most cases under the same total sample budget for planning.
no code implementations • 29 Dec 2020 • Daniel Graves, Jun Jin, Jun Luo
Our approach facilitates the learning of new policies by (1) maximizing the target MDP reward with the help of the black-box option, and (2) returning the agent to states in the learned initiation set of the black-box option where it is already optimal.
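A minimal sketch of the action-selection logic this suggests, with hypothetical interfaces; `option`, `new_policy`, and `in_initiation_set` are placeholders for illustration, not the paper's API:

```python
def act(state, new_policy, option, in_initiation_set):
    """Invoke the pre-learned black-box option where it is known to be (near-)optimal;
    otherwise let the new policy act, which is trained both to maximize the target
    MDP reward and to return the agent to the option's learned initiation set."""
    if in_initiation_set(state):
        return option(state)      # black-box option: state -> action
    return new_policy(state)      # newly learned policy
```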
no code implementations • 11 Nov 2020 • Jun Jin, Daniel Graves, Cameron Haigh, Jun Luo, Martin Jagersand
We consider real-world reinforcement learning (RL) of robotic manipulation tasks that involve both visuomotor skills and contact-rich skills.
no code implementations • 26 Jun 2020 • Daniel Graves, Nhat M. Nguyen, Kimia Hassanzadeh, Jun Jin
Reinforcement learning with a novel predictive representation is applied to autonomous driving to accomplish the task of driving between lane markings; substantial benefits in performance and generalization are observed on unseen test roads, both in simulation and on a real Jackal robot.
no code implementations • 5 Mar 2020 • Jun Jin, Laura Petrich, Masood Dehghan, Martin Jagersand
We consider the problem of visual imitation learning without human supervision (e.g., kinesthetic teaching or teleoperation) and without access to an interactive reinforcement learning (RL) training environment.
no code implementations • 28 Nov 2019 • Jun Jin, Chao Ying, Zhou Yu
The principal support vector machines method (Li et al., 2011) is a powerful tool for sufficient dimension reduction that replaces original predictors with their low-dimensional linear combinations without loss of information.
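A minimal sketch of sufficient dimension reduction in the spirit of principal support vector machines, assuming a continuous response, scikit-learn's LinearSVC, and equal-probability slicing of the response; refinements from Li et al. (2011) are omitted:

```python
import numpy as np
from sklearn.svm import LinearSVC

def psvm_directions(X, y, n_slices=5, n_directions=1):
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize predictors
    cutoffs = np.quantile(y, np.linspace(0, 1, n_slices + 1)[1:-1])
    M = np.zeros((X.shape[1], X.shape[1]))
    for c in cutoffs:
        labels = (y > c).astype(int)                  # binary slice of the response
        w = LinearSVC(C=1.0, max_iter=10000).fit(Xs, labels).coef_.ravel()
        M += np.outer(w, w)                           # accumulate SVM normal vectors
    # Leading eigenvectors of M estimate the directions spanning the central subspace.
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -n_directions:]
```

The low-dimensional linear combinations are then `X @ psvm_directions(X, y)`, used in place of the original predictors.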
no code implementations • 8 Nov 2019 • Jun Jin, Nhat M. Nguyen, Nazmus Sakib, Daniel Graves, Hengshuai Yao, Martin Jagersand
We observe that our method demonstrates time-efficient path-planning behavior with a high success rate in mapless navigation tasks.
1 code implementation • 29 Sep 2018 • Jun Jin, Laura Petrich, Masood Dehghan, Zichen Zhang, Martin Jagersand
Our proposed method can directly learn from raw videos, which removes the need for hand-engineered task specification.