Search Results for author: Jun Yamada

Found 7 papers, 1 paper with code

D-Cubed: Latent Diffusion Trajectory Optimisation for Dexterous Deformable Manipulation

no code implementations • 19 Mar 2024 • Jun Yamada, Shaohong Zhong, Jack Collins, Ingmar Posner

In this work, we propose D-Cubed, a novel trajectory optimisation method using a latent diffusion model (LDM) trained from a task-agnostic play dataset to solve dexterous deformable object manipulation tasks.

Deformable Object Manipulation
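
The excerpt above only names the ingredients, so here is a minimal, hypothetical sketch of what latent diffusion trajectory optimisation can look like: candidate action trajectories are sampled from a trained latent diffusion model and the lowest-cost one under a task objective is kept. The `denoiser`, `decode`, and `task_cost` callables are placeholder assumptions, not components from the paper.

```python
# Hypothetical sketch: sample trajectory latents from a trained diffusion model,
# decode them to action sequences, and keep the best one under a task cost.
import torch

def ddim_sample(denoiser, shape, alphas_bar, device="cpu"):
    """Deterministic DDIM-style reverse process over trajectory latents."""
    z = torch.randn(shape, device=device)
    for t in reversed(range(len(alphas_bar))):
        a_bar = alphas_bar[t]
        a_bar_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        eps = denoiser(z, torch.full((shape[0],), t, device=device))
        z0 = (z - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()          # predicted clean latent
        z = a_bar_prev.sqrt() * z0 + (1 - a_bar_prev).sqrt() * eps  # move one step toward t = 0
    return z

def optimise_trajectory(denoiser, decode, task_cost, n_samples=64, latent_dim=32):
    alphas_bar = torch.linspace(0.999, 0.01, 50)                     # toy noise schedule
    z = ddim_sample(denoiser, (n_samples, latent_dim), alphas_bar)
    trajectories = decode(z)                                         # (n_samples, horizon, action_dim)
    costs = torch.stack([task_cost(traj) for traj in trajectories])  # e.g. simulated rollout cost
    return trajectories[costs.argmin()]                              # best candidate under the cost
```

Keeping the best of several sampled trajectories is only one possible optimisation strategy; the paper's actual procedure may differ.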

World Models via Policy-Guided Trajectory Diffusion

1 code implementation • 13 Dec 2023 • Marc Rigter, Jun Yamada, Ingmar Posner

Our results demonstrate that PolyGRAD outperforms state-of-the-art baselines in terms of trajectory prediction error for short trajectories, with the exception of autoregressive diffusion.

Continuous Control • Denoising +2

TWIST: Teacher-Student World Model Distillation for Efficient Sim-to-Real Transfer

no code implementations • 7 Nov 2023 • Jun Yamada, Marc Rigter, Jack Collins, Ingmar Posner

The teacher world model then supervises a student world model that takes the domain-randomised image observations as input.
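
As a rough illustration of the teacher-student distillation described above, the sketch below trains a student encoder on domain-randomised images to match latents produced by a frozen teacher that sees the original simulator observations. The encoder architecture and MSE objective are assumptions for illustration, not the TWIST implementation.

```python
# Hypothetical sketch: distil a teacher world-model encoder (clean sim observations)
# into a student encoder that consumes domain-randomised observations.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

teacher = ConvEncoder().eval()    # assumed pretrained on clean simulator observations
student = ConvEncoder()           # trained on domain-randomised observations
optimiser = torch.optim.Adam(student.parameters(), lr=1e-4)

def distillation_step(clean_obs, randomised_obs):
    with torch.no_grad():
        target = teacher(clean_obs)                 # teacher latents as supervision targets
    loss = nn.functional.mse_loss(student(randomised_obs), target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```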

Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space

no code implementations • 6 Mar 2023 • Jun Yamada, Chia-Man Hung, Jack Collins, Ioannis Havoutis, Ingmar Posner

Motion planning framed as optimisation in structured latent spaces has recently emerged as competitive with traditional methods in terms of planning success while significantly outperforming them in terms of computational speed.

Motion Planning
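
A minimal sketch of motion planning framed as optimisation in a structured latent space, as described above: a latent plan is refined by gradient descent on goal and collision costs evaluated through a differentiable decoder conditioned on a scene embedding. `decode_trajectory`, `goal_distance`, and `collision_cost` are hypothetical placeholders, not the paper's components.

```python
# Hypothetical sketch: gradient-based planning over a latent trajectory, conditioned
# on a scene embedding, assuming the decoder and cost terms are differentiable.
import torch

def plan_in_latent_space(decode_trajectory, goal_distance, collision_cost,
                         scene_embedding, latent_dim=32, steps=200, lr=1e-2):
    z = torch.zeros(latent_dim, requires_grad=True)      # latent plan, initialised at e.g. the prior mean
    optimiser = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        path = decode_trajectory(z, scene_embedding)     # decode to joint-space waypoints
        loss = goal_distance(path) + collision_cost(path, scene_embedding)
        optimiser.zero_grad()
        loss.backward()                                  # gradients flow through the decoder
        optimiser.step()
    return decode_trajectory(z.detach(), scene_embedding)
```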

Efficient Skill Acquisition for Complex Manipulation Tasks in Obstructed Environments

no code implementations • 6 Mar 2023 • Jun Yamada, Jack Collins, Ingmar Posner

In this work, we propose a system for efficient skill acquisition that leverages an object-centric generative model (OCGM) for versatile goal identification, specifying a goal for motion planning (MP) that is combined with RL to solve complex manipulation tasks in obstructed environments.

Motion Planning • Object +1
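
A schematic, hypothetical sketch of the control flow suggested by the abstract excerpt: an OCGM proposes a goal, a motion planner handles the collision-free approach, and an RL policy takes over for the contact-rich interaction. All helper names (`identify_goal`, `plan`, `robot_state`) and the gym-style `env.step` interface are illustrative assumptions.

```python
# Hypothetical sketch of the OCGM + motion planning + RL pipeline described above.
def run_skill(obs, ocgm, motion_planner, rl_policy, env):
    goal = ocgm.identify_goal(obs)                    # object-centric goal from the scene
    approach_path = motion_planner.plan(env.robot_state(), goal)
    for waypoint in approach_path:                    # collision-free approach through obstructions
        obs, _, _, _ = env.step(waypoint)
    done = False
    while not done:                                   # learned policy for the interaction phase
        action = rl_policy(obs)
        obs, reward, done, info = env.step(action)
    return obs
```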

Task-Induced Representation Learning

no code implementations • ICLR 2022 • Jun Yamada, Karl Pertsch, Anisha Gunjal, Joseph J. Lim

We investigate the effectiveness of unsupervised and task-induced representation learning approaches on four visually complex environments, from Distracting DMControl to the CARLA driving simulator.

Contrastive Learning • Imitation Learning +2

Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments

no code implementations • 22 Oct 2020 • Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max Pflueger, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert

In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from inaccurate models in tasks that require contacts with the environment.

Reinforcement Learning (RL) +1
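
The excerpt contrasts RL with motion planning; below is a minimal, hedged sketch of one way the two can be combined in an augmented action space, routing large actions to the planner and executing small actions directly for contact-rich control. The threshold, `robot_state` helper, and gym-style interfaces are assumptions, not the released code.

```python
# Hypothetical sketch: execute an augmented action either via the motion planner
# (large displacement) or directly in the environment (small, contact-rich motion).
import numpy as np

def execute_augmented_action(action, env, motion_planner, threshold=0.3):
    total_reward, done, info = 0.0, False, {}
    if np.linalg.norm(action, ord=np.inf) > threshold:
        # Large action: treat it as a target displacement and let the motion planner
        # produce a collision-free path to that faraway configuration.
        target = env.robot_state() + action
        for waypoint in motion_planner.plan(env.robot_state(), target):
            obs, reward, done, info = env.step(waypoint)
            total_reward += reward
            if done:
                break
    else:
        # Small action: apply it directly so the agent can make and maintain contact.
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return obs, total_reward, done, info
```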
