Search Results for author: Fabio Pardo

Found 8 papers, 6 papers with code

CoMic: Complementary Task Learning & Mimicry for Reusable Skills

no code implementations • ICML 2020 • Leonard Hasenclever, Fabio Pardo, Raia Hadsell, Nicolas Heess, Josh Merel

Finally, we show that it is possible to interleave the motion capture tracking with training on complementary tasks, enriching the resulting skill space and enabling the reuse of skills not well covered by the motion capture data, such as getting up from the ground or catching a ball.

Continuous Control • reinforcement-learning

OstrichRL: A Musculoskeletal Ostrich Simulation to Study Bio-mechanical Locomotion

1 code implementation • 11 Dec 2021 • Vittorio La Barbera, Fabio Pardo, Yuval Tassa, Monica Daley, Christopher Richards, Petar Kormushev, John Hutchinson

Along with this model, we provide a set of reinforcement learning tasks, including reference motion tracking, running, and neck control, used to infer muscle actuation patterns.

Ivy: Templated Deep Learning for Inter-Framework Portability

1 code implementation • 4 Feb 2021 • Daniel Lenton, Fabio Pardo, Fabian Falck, Stephen James, Ronald Clark

We introduce Ivy, a templated Deep Learning (DL) framework which abstracts existing DL frameworks.
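The abstraction idea can be illustrated in a few lines: a function written once against a table of backend primitives runs under any framework that supplies those primitives. This is a hypothetical sketch of the concept, not Ivy's actual API; the `numpy_backend` table and `dense_layer` function are invented for illustration.

```python
import numpy as np

# Hypothetical backend table: any framework exposing these two primitives
# (e.g. a PyTorch or TensorFlow equivalent) could be swapped in.
numpy_backend = {
    "matmul": np.matmul,
    "relu": lambda x: np.maximum(x, 0),
}

def dense_layer(backend, x, w):
    """Framework-agnostic dense layer: all ops dispatch through the table."""
    return backend["relu"](backend["matmul"](x, w))

x = np.array([[1.0, -2.0]])
w = np.array([[1.0], [1.0]])
out = dense_layer(numpy_backend, x, w)  # same code, any conforming backend
```

Written this way, only the backend table is framework-specific; the layer logic itself is portable.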

Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking

1 code implementation • 15 Nov 2020 • Fabio Pardo

Deep reinforcement learning has been one of the fastest-growing fields of machine learning in recent years, and numerous libraries have been open-sourced to support research.

Continuous Control • OpenAI Gym • +1

Goal-oriented Trajectories for Efficient Exploration

no code implementations • 5 Jul 2018 • Fabio Pardo, Vitaly Levdik, Petar Kormushev

Exploration is a difficult challenge in reinforcement learning, and even recent state-of-the-art curiosity-based methods rely on the simple epsilon-greedy strategy to generate novelty.
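The epsilon-greedy strategy referred to here can be sketched in a few lines (a generic textbook illustration, not code from the paper):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon take a uniformly random action;
    otherwise take the greedy (highest-value) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# epsilon=0.0 makes the choice fully greedy and deterministic.
greedy_action = epsilon_greedy([1.0, 3.0, 2.0], epsilon=0.0)  # index of the max Q-value
```

Its simplicity is exactly the point of the critique: exploration is driven by undirected random actions rather than by where novelty is expected.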

Efficient Exploration • reinforcement-learning

Action Branching Architectures for Deep Reinforcement Learning

5 code implementations • 24 Nov 2017 • Arash Tavakoli, Fabio Pardo, Petar Kormushev

This approach achieves a linear increase in the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension.
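The output-count arithmetic can be sketched as follows. This is a minimal illustration of the branching idea under assumed sizes (8 state features, 3 action dimensions, 5 bins each) with a toy two-layer linear network, not the paper's architecture: each dimension gets its own small head off a shared trunk, so outputs grow as `num_dims * bins` rather than `bins ** num_dims` for a flat joint-action head.

```python
import numpy as np

rng = np.random.default_rng(0)

def branching_q_values(state_features, shared_w, branch_ws):
    """Shared trunk followed by one small head per action dimension."""
    hidden = np.tanh(state_features @ shared_w)  # shared representation
    return [hidden @ w for w in branch_ws]       # one Q-vector per dimension

# Hypothetical sizes: 8 state features, 3 action dimensions, 5 bins each.
state = rng.normal(size=8)
shared_w = rng.normal(size=(8, 16))
branch_ws = [rng.normal(size=(16, 5)) for _ in range(3)]

qs = branching_q_values(state, shared_w, branch_ws)
action = [int(np.argmax(q)) for q in qs]  # independent argmax per branch

# 3 * 5 = 15 outputs here, versus 5 ** 3 = 125 for a flat joint-action head.
```

The "level of independence" is visible in the last line: each branch is argmaxed separately, so selection cost also stays linear in the number of dimensions.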

Continuous Control • General Reinforcement Learning • +1
