no code implementations • 27 Feb 2025 • Maria Krinner, Elie Aljalbout, Angel Romero, Davide Scaramuzza
However, training a world model alongside the policy increases the computational complexity, leading to longer training times that are often intractable for complex real-world scenarios.
Model-based Reinforcement Learning • Reinforcement Learning (RL)
no code implementations • 17 Dec 2024 • Jiaxu Xing, Ismail Geles, Yunlong Song, Elie Aljalbout, Davide Scaramuzza
Reinforcement learning (RL) has shown great effectiveness in quadrotor control, enabling specialized policies to reach even human-champion-level performance in single-task scenarios.
1 code implementation • 15 Dec 2024 • Mariam Hassan, Sebastian Stapf, Ahmad Rahimi, Pedro M B Rezende, Yasaman Haghighi, David Brüggemann, Isinsu Katircioglu, Lin Zhang, Xiaoran Chen, Suman Saha, Marco Cannici, Elie Aljalbout, Botao Ye, Xi Wang, Aram Davtyan, Mathieu Salzmann, Davide Scaramuzza, Marc Pollefeys, Paolo Favaro, Alexandre Alahi
We present GEM, a Generalizable Ego-vision Multimodal world model that predicts future frames using a reference frame, sparse features, human poses, and ego-trajectories.
no code implementations • 12 Dec 2024 • Nico Messikommer, Jiaxu Xing, Elie Aljalbout, Davide Scaramuzza
In this framework, a teacher is trained with privileged task information, while a student tries to predict the actions of the teacher from more limited observations. For example, in a robot navigation task, the teacher might have access to distances to nearby obstacles, while the student only receives visual observations of the scene.
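The privileged teacher-student setup described above can be sketched minimally. This is an illustrative toy, not the paper's method: the dimensions, the linear teacher and student policies, and the mean-squared-error imitation loss are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the teacher observes privileged state (e.g.
# distances to nearby obstacles); the student sees only a slice of it,
# standing in for limited visual observations.
PRIV_DIM, OBS_DIM, ACT_DIM = 8, 4, 2

# Frozen "teacher" policy: a fixed linear map over the privileged state.
W_teacher = rng.normal(size=(PRIV_DIM, ACT_DIM))

def teacher_action(priv):
    return priv @ W_teacher

# Student: a linear policy over the limited observation, trained to
# imitate the teacher's actions with a mean-squared-error loss.
W_student = np.zeros((OBS_DIM, ACT_DIM))

priv = rng.normal(size=(64, PRIV_DIM))
obs = priv[:, :OBS_DIM]          # student sees only part of the state
target = teacher_action(priv)

def student_loss():
    return ((obs @ W_student - target) ** 2).mean()

loss_before = student_loss()
# One gradient-descent step on the imitation loss (closed-form gradient).
grad = 2 * obs.T @ (obs @ W_student - target) / len(obs)
W_student -= 0.05 * grad
loss_after = student_loss()
```

Even this single step reduces the imitation loss; in practice the student is trained to convergence on a stream of teacher rollouts.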
no code implementations • 18 Jul 2024 • Elie Aljalbout, Nikolaos Sotirakis, Patrick van der Smagt, Maximilian Karl, Nutan Chen
Our results highlight the benefits of using language-driven task representations for world models and a clear advantage of model-based multi-task learning over the more common model-free paradigm.
no code implementations • 3 Jul 2024 • Elie Aljalbout, Felix Frank, Patrick van der Smagt, Alexandros Paraschos
Robotic manipulation requires accurate motion and physical interaction control.
no code implementations • 22 Mar 2024 • Nutan Chen, Botond Cseke, Elie Aljalbout, Alexandros Paraschos, Marvin Alles, Patrick van der Smagt
We present a novel motion generation approach for robot arms with high degrees of freedom in complex settings; the approach can adapt online to obstacles or new via points.
no code implementations • 6 Dec 2023 • Elie Aljalbout, Felix Frank, Maximilian Karl, Patrick van der Smagt
We study the choice of action space in robot manipulation learning and sim-to-real transfer.
no code implementations • 28 Nov 2022 • Elie Aljalbout, Maximilian Karl, Patrick van der Smagt
Multi-robot manipulation tasks involve various control entities that can be separated into dynamically independent parts.
no code implementations • 19 Oct 2021 • Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, Sami Haddadin
We combine these components through a bio-inspired action space that we call AFORCE.
no code implementations • 15 Oct 2021 • Elie Aljalbout
This is due to the additional challenges encountered in the real world, such as noisy sensors and actuators, safe exploration, non-stationary dynamics, autonomous environment resetting, as well as the cost of running experiments for long periods of time.
no code implementations • 8 Oct 2021 • Marvin Alles, Elie Aljalbout
Hence, to avoid modeling the interaction between the two robots and the assembly tools used, we present a modular approach with two decentralized single-arm controllers coupled through a single centralized learned policy.
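The decentralized-controllers-with-centralized-policy structure can be illustrated with a toy sketch. This is an assumed, simplified stand-in: the proportional single-arm controller, the hand-written coordination rule, and the 2-D positions are all hypothetical, in place of the learned policy and real arm controllers.

```python
import numpy as np

# Hypothetical single-arm controller: tracks a Cartesian setpoint with a
# simple proportional law, knowing nothing about the other arm.
class ArmController:
    def __init__(self, pos, gain=0.5):
        self.pos = np.asarray(pos, dtype=float)
        self.gain = gain

    def step(self, setpoint):
        # Move a fraction of the way toward the commanded setpoint.
        self.pos += self.gain * (np.asarray(setpoint) - self.pos)
        return self.pos

# Centralized "policy" (a hand-written placeholder for a learned one):
# observes both arms and emits one setpoint per arm, coordinating them
# without either controller modeling the other robot.
def centralized_policy(pos_a, pos_b, goal):
    mid = (pos_a + pos_b) / 2
    return goal - (mid - pos_a), goal - (mid - pos_b)

left = ArmController([0.0, 0.0])
right = ArmController([1.0, 0.0])
goal = np.array([0.5, 0.5])
for _ in range(20):
    sp_a, sp_b = centralized_policy(left.pos, right.pos, goal)
    left.step(sp_a)
    right.step(sp_b)
```

Under this placeholder rule the midpoint between the arms converges to the goal while each controller only ever tracks its own setpoint, mirroring the modular structure of the approach.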
no code implementations • 2 Oct 2021 • Elie Aljalbout, Maximilian Ulmer, Rudolph Triebel
Our method enhances the exploration capability of RL algorithms, by taking advantage of the SRL setup.
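One common way an SRL module can drive exploration is by turning its reconstruction error into a novelty bonus added to the task reward. The following sketch shows that generic idea only; the linear encoder/decoder, the bonus form, and the weight `beta` are assumptions, not the paper's specific mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-representation-learning (SRL) module: a linear encoder and
# decoder over 16-D observations with a 4-D latent space.
W_enc = rng.normal(size=(16, 4)) * 0.1
W_dec = rng.normal(size=(4, 16)) * 0.1

def srl_bonus(obs):
    # Reconstruction error of the SRL module: high error suggests a
    # poorly-modeled (novel) observation, which we reward visiting.
    recon = obs @ W_enc @ W_dec
    return ((obs - recon) ** 2).mean()

def shaped_reward(task_reward, obs, beta=0.1):
    # Exploration-augmented reward: task reward plus scaled SRL bonus.
    return task_reward + beta * srl_bonus(obs)

obs = rng.normal(size=16)
r = shaped_reward(1.0, obs)
```

As the SRL module is trained, the bonus shrinks on familiar observations, so exploration pressure concentrates on novel states.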
no code implementations • 28 Sep 2021 • Elie Aljalbout, Maximilian Ulmer, Rudolph Triebel
Our method enhances the exploration capability of the RL algorithms by taking advantage of the SRL setup.
no code implementations • 30 Oct 2020 • Elie Aljalbout, Ji Chen, Konstantin Ritt, Maximilian Ulmer, Sami Haddadin
In this paper, we address the problem of vision-based obstacle avoidance for robotic manipulators.
1 code implementation • 25 Oct 2020 • Nirnai Rao, Elie Aljalbout, Axel Sauer, Sami Haddadin
Additionally, techniques from supervised learning are often adopted by default, yet they influence algorithms in a reinforcement learning setting in different and not well-understood ways.
no code implementations • 17 Mar 2020 • Elie Aljalbout, Florian Walter, Florian Röhrbein, Alois Knoll
This model is the main focus of this work, as its contribution is not limited to engineering but is also applicable to neuroscience.
2 code implementations • 21 Jul 2019 • Axel Sauer, Elie Aljalbout, Sami Haddadin
The framework leverages the idea of obtaining additional object templates during the tracking process.
Ranked #3 on Visual Object Tracking on VOT2017/18
2 code implementations • 23 Jan 2018 • Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, Maximilian Strobel, Daniel Cremers
In this paper, we propose a systematic taxonomy of clustering methods that utilize deep neural networks.
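The simplest branch of such a taxonomy, embedding data with a network and then clustering in the latent space, can be sketched as follows. This toy uses a fixed linear map as a stand-in for a learned deep encoder, and plain k-means on the embeddings; the data, encoder, and seeds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated blobs in a 10-D "raw observation" space.
X = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 10)),
    rng.normal(3.0, 0.3, size=(50, 10)),
])

# Placeholder encoder: a fixed linear map (here, selecting two
# coordinates) standing in for a learned deep network that maps inputs
# to a clustering-friendly latent space.
W = np.eye(10)[:, :2]
Z = X @ W

# Plain k-means in the latent space: the "embed, then cluster" branch of
# the taxonomy; joint-training methods instead refine the encoder and
# the cluster assignments together.
centers = np.array([Z[0], Z[50]])    # one seed point per blob
for _ in range(10):
    d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in range(2)])
```

Methods in the taxonomy differ mainly in what replaces the fixed encoder and in whether the clustering objective feeds gradients back into representation learning.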