Search Results for author: Justus Piater

Found 19 papers, 4 papers with code

Effect of Optimizer, Initializer, and Architecture of Hypernetworks on Continual Learning from Demonstration

1 code implementation 31 Dec 2023 Sayantan Auddy, Sebastian Bergner, Justus Piater

In this paper, we perform an exploratory study of the effects of different optimizers, initializers, and network architectures on the continual learning performance of hypernetworks for CLfD.

Continual Learning
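
Since the paper above studies hypernetworks for continual learning from demonstration (CLfD), a minimal sketch of the underlying building block may help: a hypernetwork that maps a per-task embedding to the parameters of a small target network. All layer sizes and names below are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' code) of a hypernetwork that generates
# the parameters of a small target network from a per-task embedding, the
# basic building block behind hypernetwork-based CLfD. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNetwork(nn.Module):
    def __init__(self, emb_dim=32, target_in=2, target_hidden=64, target_out=2):
        super().__init__()
        self.shapes = [(target_hidden, target_in), (target_hidden,),
                       (target_out, target_hidden), (target_out,)]
        n_params = sum(int(torch.tensor(s).prod()) for s in self.shapes)
        self.body = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_params))

    def forward(self, task_emb):
        flat = self.body(task_emb)
        params, i = [], 0
        for s in self.shapes:
            n = int(torch.tensor(s).prod())
            params.append(flat[i:i + n].view(*s))
            i += n
        return params  # [W1, b1, W2, b2] of the target network

def target_forward(x, params):
    W1, b1, W2, b2 = params
    return F.linear(torch.tanh(F.linear(x, W1, b1)), W2, b2)

# One embedding per demonstrated task; only the hypernetwork and the embeddings
# are trained, and a regularizer on previously generated parameters (not shown)
# protects earlier tasks from forgetting.
task_emb = nn.Parameter(torch.randn(32))
hnet = HyperNetwork()
y = target_forward(torch.randn(8, 2), hnet(task_emb))
```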

Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints

no code implementations 29 Dec 2023 Alejandro Agostini, Justus Piater

In task and motion planning (TAMP), the ambiguity and underdetermination of abstract descriptions used by task planning methods make it difficult to characterize physical constraints needed to successfully execute a task.

Computational Efficiency, Motion Planning +1

Colored Noise in PPO: Improved Exploration and Performance Through Correlated Action Sampling

no code implementations 18 Dec 2023 Jakob Hollenstein, Georg Martius, Justus Piater

Proximal Policy Optimization (PPO), a popular on-policy deep reinforcement learning method, employs a stochastic policy for exploration.

reinforcement-learning
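
As an illustration of the correlated action sampling described above, here is a minimal sketch (not the paper's implementation) of drawing temporally correlated colored noise with a chosen spectral exponent and using it in place of the i.i.d. Gaussian noise of a stochastic policy; the exponent, horizon, and scales below are assumptions.

```python
# A minimal sketch of sampling "colored" (power-law) noise and using it to
# perturb a policy's mean actions: a_t = mu_t + sigma_t * eps_t with eps_t
# temporally correlated instead of white. Not the paper's implementation.
import numpy as np

def colored_noise(beta, n_steps, n_dims, rng=None):
    """Zero-mean, unit-variance noise with power spectrum ~ 1/f^beta.
    beta = 0 is (essentially) white noise; beta = 1 is pink noise."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n_steps)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2.0)      # shape the spectrum
    scale[0] = 0.0                              # drop the DC component
    spectrum = scale[:, None] * (rng.standard_normal((freqs.size, n_dims))
                                 + 1j * rng.standard_normal((freqs.size, n_dims)))
    noise = np.fft.irfft(spectrum, n=n_steps, axis=0)
    return (noise - noise.mean(0)) / noise.std(0)  # normalize per dimension

# Example: perturb the mean actions of a 2-D policy over a 128-step rollout.
eps = colored_noise(beta=1.0, n_steps=128, n_dims=2)
mu, sigma = np.zeros(2), 0.3 * np.ones(2)
actions = mu + sigma * eps      # correlated exploration instead of white noise
```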

Regularity as Intrinsic Reward for Free Play

no code implementations NeurIPS 2023 Cansu Sancaktar, Justus Piater, Georg Martius

Our generalized formulation of Regularity as Intrinsic Reward (RaIR) allows us to operationalize it within model-based reinforcement learning.

Model-based Reinforcement Learning, reinforcement-learning
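
As a loose illustration of rewarding regularity, the sketch below scores a set of object positions by the negative entropy of their discretized pairwise offsets, so structured arrangements receive a higher intrinsic reward than clutter. This is my own simplification for illustration; the paper's exact formulation of RaIR and its integration into model-based RL differ.

```python
# A hedged illustration (a simplification, not the paper's exact formula) of a
# regularity-style intrinsic reward: regular arrangements have few distinct
# pairwise relations, hence low entropy and a high reward.
import numpy as np

def regularity_reward(positions, bin_size=0.1):
    """positions: (n_objects, dim) array; returns a scalar intrinsic reward."""
    n = positions.shape[0]
    offsets = positions[None, :, :] - positions[:, None, :]   # all pairwise offsets
    offsets = offsets[~np.eye(n, dtype=bool)]                 # drop self-pairs
    rel = np.round(offsets / bin_size).astype(int)            # discretize the relations
    _, counts = np.unique(rel, axis=0, return_counts=True)
    probs = counts / counts.sum()
    entropy = -(probs * np.log(probs)).sum()
    return -entropy   # fewer distinct relations -> lower entropy -> higher reward

# A line of equally spaced blocks is more "regular" than random clutter.
line = np.stack([np.array([0.5 * i, 0.0]) for i in range(4)])
clutter = np.random.default_rng(0).uniform(size=(4, 2))
print(regularity_reward(line), regularity_reward(clutter))
```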

Constrained Equation Learner Networks for Precision-Preserving Extrapolation of Robotic Skills

no code implementations 4 Nov 2023 Hector Perez-Villeda, Justus Piater, Matteo Saveriano

While conventional approaches to constrained regression use a single kind of basis function, e.g., Gaussian, we exploit Equation Learner Networks to learn a set of analytical expressions and use them as basis functions.
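
To make the contrast concrete, here is a minimal sketch of conventional constrained regression with a single, fixed kind of basis function (Gaussian RBFs), fitting the weights by least squares subject to exact via-point constraints via the KKT system; the paper replaces such fixed bases with analytical expressions learned by Equation Learner Networks. All sizes and names below are illustrative assumptions.

```python
# Constrained regression with fixed Gaussian basis functions: minimize
# ||P w - y||^2 subject to A w = y_c, solved through the KKT linear system.
import numpy as np

def fit_constrained(x, y, x_c, y_c, centers, width=0.2):
    phi = lambda q: np.exp(-((q[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    P, A = phi(x), phi(x_c)                      # data and constraint design matrices
    H, m = P.T @ P, A.shape[0]
    # KKT system: [[H, A^T], [A, 0]] [w, lambda]^T = [P^T y, y_c]^T
    kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([P.T @ y, y_c])
    w = np.linalg.solve(kkt, rhs)[: centers.size]
    return lambda q: phi(q) @ w

# Fit noisy samples of sin(x) while forcing the curve through (0, 0) and (3, 1).
rng = np.random.default_rng(0)
x = np.linspace(0, 3, 50)
y = np.sin(x) + 0.05 * rng.standard_normal(50)
f = fit_constrained(x, y, x_c=np.array([0.0, 3.0]), y_c=np.array([0.0, 1.0]),
                    centers=np.linspace(0, 3, 12))
print(f(np.array([0.0, 3.0])))   # ~[0, 1]: the constraints are met exactly
```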

Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization

2 code implementations 1 Aug 2022 David Peer, Bart Keulen, Sebastian Stabinger, Justus Piater, Antonio Rodríguez-Sánchez

We show empirically that we can therefore train a "vanilla" fully connected network and convolutional neural network -- no skip connections, batch normalization, dropout, or any other architectural tweak -- with 500 layers by simply adding the batch-entropy regularization term to the loss function.
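
A hedged sketch of the idea follows, assuming a Gaussian entropy proxy for each layer's activations over the batch: the per-layer "batch entropy" is estimated as 0.5*log(2*pi*e*var) and a penalty keeps it from collapsing in deep stacks. The paper's exact regularizer, targets, and weighting differ; the floor and coefficient below are assumptions.

```python
# Layerwise batch-entropy regularization, sketched with a Gaussian entropy
# proxy per neuron; a hinge penalty discourages entropy collapse per layer.
import math
import torch
import torch.nn as nn

def batch_entropy(a, eps=1e-6):
    # a: (batch, features); per-neuron variance over the batch, averaged entropy
    var = a.var(dim=0, unbiased=False) + eps
    return 0.5 * torch.log(2 * math.pi * math.e * var).mean()

class DeepMLP(nn.Module):
    def __init__(self, depth=50, width=128):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(depth)])
        self.head = nn.Linear(width, 10)

    def forward(self, x):
        entropies = []
        for layer in self.layers:
            x = layer(x)
            entropies.append(batch_entropy(x))
        return self.head(x), torch.stack(entropies)

model = DeepMLP()
x, target = torch.randn(64, 128), torch.randint(0, 10, (64,))
logits, H = model(x)
h_min = 0.5                                      # assumed entropy floor per layer
loss = nn.functional.cross_entropy(logits, target) \
       + 0.1 * torch.relu(h_min - H).mean()      # penalize layers whose entropy collapses
loss.backward()
```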

Action Noise in Off-Policy Deep Reinforcement Learning: Impact on Exploration and Performance

no code implementations 8 Jun 2022 Jakob Hollenstein, Sayantan Auddy, Matteo Saveriano, Erwan Renaudo, Justus Piater

Many Deep Reinforcement Learning (D-RL) algorithms rely on simple forms of exploration such as the additive action noise often used in continuous control domains.

Continuous Control, reinforcement-learning +1
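
For reference, the two most common additive action-noise schemes in off-policy continuous control look roughly like this: uncorrelated Gaussian noise and temporally correlated Ornstein-Uhlenbeck (OU) noise added to a deterministic policy's action before it is executed. This is a generic sketch under assumed scales and bounds, not the paper's experimental setup.

```python
# Additive action noise for a deterministic policy: white Gaussian noise vs.
# temporally correlated Ornstein-Uhlenbeck noise, clipped to the action bounds.
import numpy as np

class OUNoise:
    """dx = theta * (0 - x) dt + sigma * sqrt(dt) * N(0, 1)"""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1e-2):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)

    def sample(self):
        self.x += self.theta * (-self.x) * self.dt \
                  + self.sigma * np.sqrt(self.dt) * np.random.standard_normal(self.x.size)
        return self.x

def explore(policy_action, noise, low=-1.0, high=1.0):
    return np.clip(policy_action + noise, low, high)

act_dim, rng = 2, np.random.default_rng(0)
ou = OUNoise(act_dim)
policy_action = np.zeros(act_dim)                                      # stand-in for pi(s)
a_gauss = explore(policy_action, 0.1 * rng.standard_normal(act_dim))   # white noise
a_ou = explore(policy_action, ou.sample())                             # correlated noise
```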

Continual Learning from Demonstration of Robotics Skills

1 code implementation 14 Feb 2022 Sayantan Auddy, Jakob Hollenstein, Matteo Saveriano, Antonio Rodríguez-Sánchez, Justus Piater

We empirically demonstrate the effectiveness of this approach in remembering long sequences of trajectory learning tasks without the need to store any data from past demonstrations.

Continual Learning

DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning

1 code implementation 4 Dec 2020 Alper Ahmetoglu, M. Yunus Seker, Justus Piater, Erhan Oztop, Emre Ugur

We propose a novel general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for non-trivial action planning.

Object
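
As an illustration of how continuous object features can be turned into discrete, action-grounded categories, here is a hedged sketch of a straight-through binary bottleneck; the hard 0/1 codes can then serve as symbols for rule learning. This illustrates the general idea only, not the authors' architecture; the layer sizes are assumptions.

```python
# A straight-through binary bottleneck: the forward pass emits hard 0/1 codes
# usable as discrete symbols, while gradients flow through the sigmoid.
import torch
import torch.nn as nn

class BinaryBottleneck(nn.Module):
    def forward(self, logits):
        probs = torch.sigmoid(logits)
        hard = (probs > 0.5).float()
        return hard + probs - probs.detach()     # straight-through estimator

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4), BinaryBottleneck())
symbols = encoder(torch.randn(8, 16))            # 8 objects -> 4-bit discrete categories
print(symbols)                                   # entries are exactly 0.0 or 1.0
```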

Evaluating the Progress of Deep Learning for Visual Relational Concepts

no code implementations 29 Jan 2020 Sebastian Stabinger, Peer David, Justus Piater, Antonio Rodríguez-Sánchez

Convolutional Neural Networks (CNNs) have become the state-of-the-art method for image classification over the last ten years.

Classification, General Classification +2

Symbol Emergence in Cognitive Developmental Systems: a Survey

no code implementations 26 Jan 2018 Tadahiro Taniguchi, Emre Ugur, Matej Hoffmann, Lorenzo Jamone, Takayuki Nagai, Benjamin Rosman, Toshihiko Matsuka, Naoto Iwahashi, Erhan Oztop, Justus Piater, Florentin Wörgötter

However, the symbol grounding problem was originally posed to connect symbolic AI with sensorimotor information; it did not consider the many interdisciplinary phenomena in human communication and the dynamic symbol systems of our society that semiotics has studied.

25 years of CNNs: Can we compare to human abstraction capabilities?

no code implementations 28 Jul 2016 Sebastian Stabinger, Antonio Rodríguez-Sánchez, Justus Piater

We try to determine the progress made by convolutional neural networks over the past 25 years in classifying images into abstract classes.

Learning Abstract Classes using Deep Learning

no code implementations 17 Jun 2016 Sebastian Stabinger, Antonio Rodriguez-Sanchez, Justus Piater

Humans are generally good at learning abstract concepts about objects and scenes (e.g., spatial orientation, relative sizes, etc.).

Proceedings of the 37th Annual Workshop of the Austrian Association for Pattern Recognition (ÖAGM/AAPR), 2013

no code implementations 6 Apr 2013 Justus Piater, Antonio Rodríguez-Sánchez

This volume represents the proceedings of the 37th Annual Workshop of the Austrian Association for Pattern Recognition (ÖAGM/AAPR), held May 23-24, 2013, in Innsbruck, Austria.

RWTH-PHOENIX-Weather: A Large Vocabulary Sign Language Recognition and Translation Corpus

no code implementations LREC 2012 Jens Forster, Christoph Schmidt, Thomas Hoyoux, Oscar Koller, Uwe Zelle, Justus Piater, Hermann Ney

This paper introduces the RWTH-PHOENIX-Weather corpus, a video-based, large vocabulary corpus of German Sign Language suitable for statistical sign language recognition and translation.

Automatic Speech Recognition, Automatic Speech Recognition (ASR) +5
