no code implementations • 22 Mar 2024 • Nutan Chen, Elie Aljalbout, Botond Cseke, Patrick van der Smagt
This integration facilitates rapid adaptation to new tasks and optimizes the utilization of accumulated expertise by allowing robots to learn and generalize from demonstrated trajectories.
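Learning from demonstrated trajectories can be sketched generically: collect several demonstrations of a motion and distill them into a reference the robot can reproduce. The snippet below is a minimal illustration of that idea only, not the paper's actual method; the synthetic `demos` data and the simple averaging step are assumptions for the example.

```python
import numpy as np

# Hypothetical demonstration data: each trajectory is a (T, dim) array of
# end-effector positions, here generated synthetically for illustration.
rng = np.random.default_rng(0)
demos = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(5)]

def fit_mean_trajectory(trajectories):
    """Distill time-aligned demonstrations into one reference trajectory
    by averaging them pointwise (a deliberately simple stand-in for a
    learned trajectory model)."""
    return np.mean(np.stack(trajectories), axis=0)

reference = fit_mean_trajectory(demos)
print(reference.shape)  # (50, 2): one averaged waypoint per timestep
```

A learned policy could then track `reference`, or a richer model (e.g. a probabilistic movement primitive) could replace the pointwise mean.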
no code implementations • 6 Dec 2023 • Elie Aljalbout, Felix Frank, Maximilian Karl, Patrick van der Smagt
Our findings have important implications for the design of RL algorithms for robot manipulation tasks, and highlight the need for careful consideration of action spaces when training and transferring RL agents for real-world robotics.
no code implementations • 28 Nov 2022 • Elie Aljalbout, Maximilian Karl, Patrick van der Smagt
Multi-robot manipulation tasks involve various control entities that can be separated into dynamically independent parts.
no code implementations • 19 Oct 2021 • Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, Sami Haddadin
We combine these components through a bio-inspired action space that we call AFORCE.
no code implementations • 15 Oct 2021 • Elie Aljalbout
This is due to the additional challenges encountered in the real world, such as noisy sensors and actuators, safe exploration, non-stationary dynamics, and autonomous environment resetting, as well as the cost of running experiments for long periods of time.
no code implementations • 8 Oct 2021 • Marvin Alles, Elie Aljalbout
Hence, to avoid modeling the interaction between the two robots and the assembly tools they use, we present a modular approach with two decentralized single-arm controllers coupled through a single centralized learned policy.
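The architecture described above, two decentralized single-arm controllers coupled by one centralized learned policy, can be sketched as follows. The proportional tracking law and the placeholder policy are assumptions for illustration; in the actual system the centralized component would be a trained neural policy.

```python
import numpy as np

class SingleArmController:
    """Decentralized low-level controller: tracks a commanded setpoint for
    one arm with a simple proportional law (illustrative stand-in for a
    real impedance or position controller)."""
    def __init__(self, kp=0.5):
        self.kp = kp

    def step(self, state, setpoint):
        return state + self.kp * (setpoint - state)

def centralized_policy(obs_left, obs_right):
    """Hypothetical centralized policy: maps the joint observation of both
    arms to one setpoint per arm. Here it simply commands both arms toward
    their midpoint; a learned policy would replace this mapping."""
    midpoint = 0.5 * (obs_left + obs_right)
    return midpoint, midpoint

left, right = SingleArmController(), SingleArmController()
s_l, s_r = np.array([0.0, 0.0]), np.array([1.0, 1.0])
for _ in range(20):
    sp_l, sp_r = centralized_policy(s_l, s_r)  # one shared decision step
    s_l = left.step(s_l, sp_l)                 # each arm tracks locally
    s_r = right.step(s_r, sp_r)
print(np.allclose(s_l, s_r, atol=1e-2))  # True: arms meet at a shared target
```

The key design point is the interface: the centralized policy only exchanges setpoints with the arms, so neither the inter-robot interaction nor the assembly tools need to be modeled explicitly.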
no code implementations • 2 Oct 2021 • Elie Aljalbout, Maximilian Ulmer, Rudolph Triebel
Our method enhances the exploration capability of RL algorithms, by taking advantage of the SRL setup.
no code implementations • 30 Oct 2020 • Elie Aljalbout, Ji Chen, Konstantin Ritt, Maximilian Ulmer, Sami Haddadin
In this paper, we address the problem of vision-based obstacle avoidance for robotic manipulators.
1 code implementation • 25 Oct 2020 • Nirnai Rao, Elie Aljalbout, Axel Sauer, Sami Haddadin
Additionally, techniques from supervised learning are often adopted by default, even though they affect algorithms in a reinforcement learning setting in different and poorly understood ways.
no code implementations • 17 Mar 2020 • Elie Aljalbout, Florian Walter, Florian Röhrbein, Alois Knoll
This model is the main focus of this work, as its contribution is not limited to engineering but is also applicable to neuroscience.
2 code implementations • 21 Jul 2019 • Axel Sauer, Elie Aljalbout, Sami Haddadin
The framework leverages the idea of obtaining additional object templates during the tracking process.
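A minimal sketch of acquiring templates online during tracking: a candidate patch is stored as a new template when it matches the existing templates well enough to be trusted as the target, but not so closely that it is redundant. The cosine-similarity test and both thresholds are assumptions for this example, not the framework's actual criteria.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two flattened appearance patches."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def maybe_add_template(templates, patch, match_thresh=0.8, novelty_thresh=0.95):
    """Add `patch` as a new template if it confidently matches an existing
    template (likely the target) yet differs enough to add new appearance
    information. Thresholds are illustrative."""
    best = max(similarity(t, patch) for t in templates)
    if match_thresh < best < novelty_thresh:
        templates.append(patch)
    return templates
```

Over time the template set then covers appearance changes (pose, lighting, occlusion) that a single initial template would miss.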
Ranked #3 on Visual Object Tracking on VOT2017/18
2 code implementations • 23 Jan 2018 • Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, Maximilian Strobel, Daniel Cremers
In this paper, we propose a systematic taxonomy of clustering methods that utilize deep neural networks.
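Many of the deep clustering methods such a taxonomy covers share a common objective shape: a reconstruction term that keeps the learned embedding faithful to the data, plus a clustering term that pulls embeddings toward centroids. The function below sketches that combined loss with NumPy; the specific k-means-style clustering term and the weight `lam` are illustrative choices, not a particular method from the paper.

```python
import numpy as np

def deep_clustering_loss(z, x, x_recon, centroids, lam=0.1):
    """Generic deep-clustering objective sketch.

    z        : (N, d) latent embeddings of the inputs
    x        : (N, D) original inputs
    x_recon  : (N, D) autoencoder reconstructions
    centroids: (K, d) cluster centers in latent space
    Returns reconstruction MSE + lam * mean squared distance of each
    embedding to its nearest centroid (a k-means-style clustering term).
    """
    recon = np.mean((x - x_recon) ** 2)
    dists = np.linalg.norm(z[:, None, :] - centroids[None, :, :], axis=-1)
    cluster = np.mean(dists.min(axis=1) ** 2)
    return recon + lam * cluster
```

In practice both terms are minimized jointly by gradient descent, and the relative weight `lam` controls the trade-off the taxonomy uses to distinguish method families.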