Humanoid Control
10 papers with code • 0 benchmarks • 0 datasets
Control of a high-dimensional humanoid. This can include skill learning by tracking motion capture clips, learning goal-directed tasks like going towards a moving target, and generating motion within a physics simulator.
Most implemented papers
Optical Non-Line-of-Sight Physics-based 3D Human Pose Estimation
We bring together a diverse set of technologies from NLOS imaging, human pose estimation and deep reinforcement learning to construct an end-to-end data processing pipeline that converts a raw stream of photon measurements into a full 3D human pose sequence estimate.
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions.
ClipUp: A Simple and Powerful Optimizer for Distribution-based Policy Evolution
In these algorithms, gradients of the total reward with respect to the policy parameters are estimated using a population of solutions drawn from a search distribution, and then used for policy optimization with stochastic gradient ascent.
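The gradient estimation described above can be sketched as a small NumPy routine. This is a generic distribution-based (evolution-strategies-style) estimator with antithetic sampling, not ClipUp itself; the function name, toy reward, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def es_gradient(reward_fn, theta, sigma=0.1, pop_size=50, rng=None):
    """Estimate the gradient of expected reward w.r.t. policy parameters
    by sampling a population of perturbations from an isotropic Gaussian
    search distribution (antithetic pairs reduce variance)."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal((pop_size, theta.size))
    # Evaluate each perturbed solution and its mirror image.
    rewards_pos = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards_neg = np.array([reward_fn(theta - sigma * e) for e in eps])
    # Score-function estimate of the reward gradient.
    return (rewards_pos - rewards_neg) @ eps / (2 * sigma * pop_size)

# One step of stochastic gradient ascent on a toy quadratic reward
# (maximized at theta = [1, 1, 1]); a real policy would be a neural net.
reward = lambda p: -np.sum((p - 1.0) ** 2)
theta = np.zeros(3)
theta = theta + 0.5 * es_gradient(reward, theta, rng=0)
```

ClipUp's contribution sits on top of such an estimator: it normalizes the estimated gradient and clips the update magnitude, so the step size is easier to tune.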
On the model-based stochastic value gradient for continuous reinforcement learning
For over a decade, model-based reinforcement learning has been seen as a way to leverage control-based domain knowledge to improve the sample-efficiency of reinforcement learning agents.
Learning to Brachiate via Simplified Model Imitation
Key to our method is the use of a simplified model, a point mass with a virtual arm, for which we first learn a policy that can brachiate across handhold sequences with a prescribed order.
Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress
To address these issues, we present reincarnating RL as an alternative workflow or class of problem settings, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another.
Learning Soccer Juggling Skills with Layer-wise Mixture-of-Experts
Learning physics-based character controllers that can successfully integrate diverse motor skills using a single policy remains a challenging problem.
MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control
We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control, and show that the learned low-level component can be re-used to efficiently learn downstream high-level tasks.
MuJoCo MPC for Humanoid Control: Evaluation on HumanoidBench
We tackle HumanoidBench, the recently introduced benchmark for whole-body humanoid control, using MuJoCo MPC.
Reinforcement learning-based motion imitation for physiologically plausible musculoskeletal motor control
In this work, we present a model-free motion imitation framework (KINESIS) to advance the understanding of muscle-based motor control.