1 code implementation • 4 Oct 2024 • Guy Tevet, Sigal Raab, Setareh Cohan, Daniele Reda, Zhengyi Luo, Xue Bin Peng, Amit H. Bermano, Michiel Van de Panne
The former can generate a wide variety of motions while adhering to intuitive controls such as text, whereas the latter offers physically plausible motion and direct interaction with the environment.
no code implementations • 17 May 2024 • Setareh Cohan, Guy Tevet, Daniele Reda, Xue Bin Peng, Michiel Van de Panne
To this end, we propose Conditional Motion Diffusion In-betweening (CondMDI) which allows for arbitrary dense-or-sparse keyframe placement and partial keyframe constraints while generating high-quality motions that are diverse and coherent with the given keyframes.
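One common way to condition a motion diffusion model on keyframes is inpainting-style imputation: at each denoising step, the observed keyframes overwrite the corresponding frames of the noisy sample. A minimal numpy sketch of that masking step (shapes, values, and the mechanism itself are illustrative assumptions, not CondMDI's actual architecture):

```python
import numpy as np

def impute_keyframes(x_t, keyframes, mask):
    # Overwrite frames of the noisy sample with the observed keyframe
    # values wherever mask == 1; other frames are left untouched.
    return mask * keyframes + (1.0 - mask) * x_t

rng = np.random.default_rng(0)
x_t = rng.normal(size=(8, 3))            # current noisy motion sample
keyframes = np.zeros((8, 3))
keyframes[0], keyframes[7] = 1.0, -1.0   # two sparse keyframe constraints
mask = np.zeros((8, 1))                  # broadcasts over the feature axis
mask[0] = mask[7] = 1.0                  # which frames are constrained

x_t = impute_keyframes(x_t, keyframes, mask)
```

Repeating this imputation inside the sampling loop keeps the generated motion consistent with arbitrarily placed keyframes.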
no code implementations • 4 Jul 2023 • Daniele Reda, Jungdam Won, Yuting Ye, Michiel Van de Panne, Alexander Winkler
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
1 code implementation • 24 Oct 2022 • Setareh Cohan, Nam Hee Kim, David Rolnick, Michiel Van de Panne
Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy.
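The region density here refers to the linear regions of a ReLU network; along a fixed trajectory it can be estimated by counting distinct activation patterns at sampled points. A rough sketch for a one-hidden-layer network (the network size, segment endpoints, and sampling resolution are illustrative assumptions):

```python
import numpy as np

def activation_pattern(W1, b1, x):
    # Binary on/off pattern of the hidden units at x; each distinct
    # pattern corresponds to one linear region of the network.
    return tuple((W1 @ x + b1 > 0).astype(int))

def count_regions_along(W1, b1, start, end, n=200):
    # Count distinct linear regions encountered on the segment
    # start -> end: a rough proxy for region density along a trajectory.
    ts = np.linspace(0.0, 1.0, n)
    patterns = {activation_pattern(W1, b1, (1 - t) * start + t * end)
                for t in ts}
    return len(patterns)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
n_regions = count_regions_along(W1, b1,
                                np.array([-1.0, -1.0]),
                                np.array([1.0, 1.0]))
```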
1 code implementation • SIGGRAPH 2022 • Zhaoming Xie, Sebastian Starke, Hung Yu Ling, Michiel Van de Panne
Learning physics-based character controllers that can successfully integrate diverse motor skills using a single policy remains a challenging problem.
1 code implementation • 8 May 2022 • Daniele Reda, Hung Yu Ling, Michiel Van de Panne
Key to our method is the use of a simplified model, a point mass with a virtual arm, for which we first learn a policy that can brachiate across handhold sequences with a prescribed order.
1 code implementation • 30 Apr 2022 • Tianxin Tao, Matthew Wilson, Ruiyu Gou, Michiel Van de Panne
Finally, a third stage learns control policies that can reproduce the weaker get-up motions at much slower speeds.
no code implementations • 11 Apr 2022 • Tianxin Tao, Daniele Reda, Michiel Van de Panne
Vision Transformers (ViT) have recently demonstrated the significant potential of transformer architectures for computer vision.
no code implementations • 7 Mar 2022 • Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, C. Karen Liu, Julien Pettré, Michiel Van de Panne, Marie-Paule Cani
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions in order to achieve a particular goal within an arbitrary environment.
no code implementations • CVPR 2022 • Tianxin Tao, Xiaohang Zhan, Zhongquan Chen, Michiel Van de Panne
Motion style transfer is a common method for enriching character animation.
no code implementations • 6 Feb 2022 • Michael Teng, Michiel Van de Panne, Frank Wood
Distributional reinforcement learning (RL) aims to learn a value network that predicts the full distribution of the returns for a given state, often modeled via a quantile-based critic.
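A quantile-based critic of this kind is commonly trained with the pinball (quantile-regression) loss over pairwise TD errors. A minimal numpy sketch, with illustrative quantile fractions and targets rather than values from the paper:

```python
import numpy as np

def quantile_loss(theta, targets, taus):
    # Pinball loss for a quantile critic: theta holds the predicted
    # return quantiles, targets the sampled Bellman targets.
    u = targets[None, :] - theta[:, None]     # pairwise TD errors
    weight = np.abs(taus[:, None] - (u < 0))  # asymmetric quantile weight
    return float(np.mean(weight * np.abs(u)))

taus = (np.arange(4) + 0.5) / 4.0   # quantile midpoints: 0.125 ... 0.875
theta = np.array([0.0, 1.0, 2.0, 3.0])
targets = np.array([0.5, 1.5, 2.5])
loss = quantile_loss(theta, targets, taus)
```

Minimizing this loss drives each `theta[i]` toward the `taus[i]` quantile of the target distribution.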
no code implementations • 28 Oct 2021 • Nicholas Roy, Ingmar Posner, Tim Barfoot, Philippe Beaudoin, Yoshua Bengio, Jeannette Bohg, Oliver Brock, Isabelle Depatie, Dieter Fox, Dan Koditschek, Tomas Lozano-Perez, Vikash Mansinghka, Christopher Pal, Blake Richards, Dorsa Sadigh, Stefan Schaal, Gaurav Sukhatme, Denis Therien, Marc Toussaint, Michiel Van de Panne
Machine learning has long since become a keystone technology, accelerating science and applications in a broad range of domains.
no code implementations • 2 May 2021 • Zhiqi Yin, Zeshi Yang, Michiel Van de Panne, KangKang Yin
We present a framework that enables the discovery of diverse and natural-looking motion strategies for athletic skills such as the high jump.
no code implementations • 20 Apr 2021 • Zhaoming Xie, Xingye Da, Buck Babich, Animesh Garg, Michiel Van de Panne
Model-free reinforcement learning (RL) for legged locomotion commonly relies on a physics simulator that can accurately predict the behaviors of every degree of freedom of the robot.
1 code implementation • 26 Mar 2021 • Hung Yu Ling, Fabio Zinno, George Cheng, Michiel Van de Panne
A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently rich set of motion capture clips.
no code implementations • 9 Oct 2020 • Daniele Reda, Tianxin Tao, Michiel Van de Panne
Learning to locomote is one of the most common tasks in physics-based animation and deep reinforcement learning (RL).
1 code implementation • 22 Sep 2020 • Amin Babadi, Michiel Van de Panne, C. Karen Liu, Perttu Hämäläinen
We propose a novel method for exploring the dynamics of physically based animated characters, and learning a task-agnostic action space that makes movement optimization easier.
1 code implementation • 9 May 2020 • Zhaoming Xie, Hung Yu Ling, Nam Hee Kim, Michiel Van de Panne
Humans are highly adept at walking in environments with foot placement constraints, including stepping-stone scenarios where the footstep locations are fully constrained.
no code implementations • L4DC 2020 • Nam Hee Kim, Zhaoming Xie, Michiel Van de Panne
Many dynamical systems exhibit similar structure, as often captured by hand-designed simplified models that can be used for analysis and control.
1 code implementation • Proceedings of ACM SIGGRAPH Motion, Interaction, and Games (MIG 2019) 2019 • Farzad Abdolhosseini, Hung Yu Ling, Zhaoming Xie, Xue Bin Peng, Michiel Van de Panne
We describe, compare, and evaluate four practical methods for encouraging motion symmetry.
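One widely used way to encourage motion symmetry is an auxiliary loss that penalizes the policy for acting differently in mirrored states; a toy sketch of that idea, with a hypothetical mirroring that simply negates each component (real characters need joint-specific reflection maps):

```python
import numpy as np

def symmetry_loss(policy, s, mirror_state, mirror_action):
    # Penalize asymmetry: the action in state s should be the mirror of
    # the action taken in the mirrored state. mirror_state / mirror_action
    # stand in for the character-specific left/right reflections.
    a = policy(s)
    a_mirrored = mirror_action(policy(mirror_state(s)))
    return float(np.sum((a - a_mirrored) ** 2))

mirror = lambda v: -v                     # toy reflection: sign flip
symmetric_policy = lambda s: 2.0 * s      # equivariant under mirroring
biased_policy = lambda s: 2.0 * s + 1.0   # breaks left/right symmetry

s = np.array([0.3, -0.7])
```

A perfectly symmetric policy incurs zero loss, while any left/right bias is penalized.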
1 code implementation • 22 Mar 2019 • Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan Hurst, Michiel Van de Panne
Deep reinforcement learning (DRL) is a promising approach for developing legged locomotion skills.
Robotics
1 code implementation • 17 Apr 2018 • Glen Berseth, Xue Bin Peng, Michiel Van de Panne
We provide 89 challenging simulation environments that range in difficulty.
6 code implementations • 8 Apr 2018 • Xue Bin Peng, Pieter Abbeel, Sergey Levine, Michiel Van de Panne
We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills.
3 code implementations • 15 Mar 2018 • Zhaoming Xie, Glen Berseth, Patrick Clary, Jonathan Hurst, Michiel Van de Panne
By formulating a feedback control problem as finding the optimal policy for a Markov Decision Process, we are able to learn robust walking controllers that imitate a reference motion with DRL.
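The imitation objective in such an MDP formulation is typically a reward that peaks when the simulated pose tracks the reference motion. A minimal sketch (the exponential form and the weight are illustrative choices, not the paper's exact reward):

```python
import numpy as np

def imitation_reward(q, q_ref, w=2.0):
    # Exponentiated negative pose-tracking error, bounded in (0, 1];
    # the weight w is an illustrative value, not taken from the paper.
    return float(np.exp(-w * np.sum((q - q_ref) ** 2)))

q_ref = np.array([0.1, -0.4, 0.8])               # reference joint angles
r_match = imitation_reward(q_ref, q_ref)         # perfect tracking
r_off = imitation_reward(q_ref + 0.5, q_ref)     # tracking error
```

Summing this reward over an episode turns "imitate the reference motion" into a standard return-maximization problem for DRL.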
Robotics
no code implementations • ICLR 2018 • Glen Berseth, Cheng Xie, Paul Cernek, Michiel Van de Panne
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment.
no code implementations • 11 Jan 2018 • Glen Berseth, Michiel Van de Panne
Deep reinforcement learning has achieved great strides in solving challenging motion control tasks.
no code implementations • 3 Nov 2016 • Xue Bin Peng, Michiel Van de Panne
The use of deep reinforcement learning allows for high-dimensional state descriptors, but little is known about how the choice of action representation impacts the learning difficulty and the resulting performance.
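A concrete instance of this action-representation choice is whether the policy outputs raw joint torques or PD target angles that a low-level controller converts to torques. A sketch of that conversion, with illustrative gains:

```python
import numpy as np

def pd_torque(q, qdot, q_target, kp=50.0, kd=5.0):
    # Convert a target-angle action into joint torques via PD control;
    # the gains kp and kd are illustrative, not tuned values.
    return kp * (q_target - q) - kd * qdot

q = np.array([0.0, 0.2])        # current joint angles
qdot = np.array([0.0, 0.0])     # current joint velocities
tau = pd_torque(q, qdot, q_target=np.array([0.0, 0.2]))
```

With the character already at the target and at rest, the resulting torque is zero; the PD layer effectively shapes the action space the policy must explore.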