Search Results for author: Marc Toussaint

Found 37 papers, 10 papers with code

Newton methods for k-order Markov Constrained Motion Problems

1 code implementation • 1 Jul 2014 • Marc Toussaint

This report documents a framework for robot motion optimization that draws on classical constrained optimization methods.

Robotics

Probabilistic Recurrent State-Space Models

4 code implementations • ICML 2018 • Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, Sebastian Trimpe

State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification.

Gaussian Processes • Time Series +2

Motion Planning Explorer: Visualizing Local Minima using a Local-Minima Tree

2 code implementations • 11 Sep 2019 • Andreas Orthey, Benjamin Frész, Marc Toussaint

Visualizing those minima lets a user guide, prevent, or predict motions.

Motion Planning

Multilevel Motion Planning: A Fiber Bundle Formulation

1 code implementation • 18 Jul 2020 • Andreas Orthey, Sohaib Akbar, Marc Toussaint

Those methods exploit the structure of fiber bundles through the use of bundle primitives.

Robotics

MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze

1 code implementation • 23 Nov 2020 • Philipp Kratzer, Simon Bihlmaier, Niteesh Balachandra Midlagajni, Rohit Prakash, Marc Toussaint, Jim Mainprice

Hence, in this paper, we present a novel dataset of full-body motion for everyday manipulation tasks, which includes workspace geometry and eye-gaze.

Robotics

Trajectory-Based Off-Policy Deep Reinforcement Learning

2 code implementations • 14 May 2019 • Andreas Doerr, Michael Volpp, Marc Toussaint, Sebastian Trimpe, Christian Daniel

Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks.

Continuous Control • Policy Gradient Methods +3

Describing Physics For Physical Reasoning: Force-based Sequential Manipulation Planning

1 code implementation • 28 Feb 2020 • Marc Toussaint, Jung-Su Ha, Danny Driess

Physical reasoning is a core aspect of intelligence in animals and humans.

Robotics

Kinematic Morphing Networks for Manipulation Skill Transfer

no code implementations • 5 Mar 2018 • Peter Englert, Marc Toussaint

The transfer of a robot skill between different geometric environments is non-trivial since a wide variety of environments exists, sensor observations as well as robot motions are high-dimensional, and the environment might only be partially observed.

Physical problem solving: Joint planning with symbolic, geometric, and dynamic constraints

no code implementations • 25 Jul 2017 • Ilker Yildirim, Tobias Gerstenberg, Basil Saeed, Marc Toussaint, Josh Tenenbaum

In Experiment 2, we asked participants online to judge whether they think the person in the lab used one or two hands.

Identification of Unmodeled Objects from Symbolic Descriptions

no code implementations • 23 Jan 2017 • Andrea Baisero, Stefan Otte, Peter Englert, Marc Toussaint

Successful human-robot cooperation hinges on each agent's ability to process and exchange information about the shared environment and the task at hand.

Ensemble Learning • Object

Advancing Bayesian Optimization: The Mixed-Global-Local (MGL) Kernel and Length-Scale Cool Down

no code implementations • 9 Dec 2016 • Kim Peter Wabersich, Marc Toussaint

Bayesian Optimization (BO) has become a core method for solving expensive black-box optimization problems.

Bayesian Optimization

The Advantage of Cross Entropy over Entropy in Iterative Information Gathering

no code implementations • 26 Sep 2014 • Johannes Kulick, Robert Lieck, Marc Toussaint

Gathering the most information by picking the least amount of data is a common task in experimental design or when exploring an unknown environment in reinforcement learning and robotics.

Experimental Design
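As a rough illustration of the two quantities the title contrasts (not the paper's selection criterion): entropy measures the uncertainty of a single distribution, while cross entropy scores one distribution against another. A minimal sketch with made-up belief distributions:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross entropy H(p, q) = -sum_i p_i log q_i."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

belief = [0.7, 0.2, 0.1]   # hypothetical belief over 3 hypotheses
other = [0.5, 0.3, 0.2]    # hypothetical reference distribution

# Gibbs' inequality: H(p, q) >= H(p), with equality iff p == q.
assert cross_entropy(belief, other) >= entropy(belief)
```

How these quantities are used to pick informative data points is, of course, specific to the paper.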

Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress

no code implementations • NeurIPS 2012 • Manuel Lopes, Tobias Lang, Marc Toussaint, Pierre-Yves Oudeyer

Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without consideration of the empirical prediction error.

Model-based Reinforcement Learning • reinforcement-learning +1

An Approximate Inference Approach to Temporal Optimization in Optimal Control

no code implementations • NeurIPS 2010 • Konrad Rawlik, Marc Toussaint, Sethu Vijayakumar

Algorithms based on iterative local approximations present a practical approach to optimal control in robotic systems.

Modelling motion primitives and their timing in biologically executed movements

no code implementations • NeurIPS 2007 • Ben Williams, Marc Toussaint, Amos J. Storkey

Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space.

Rapidly-Exploring Quotient-Space Trees: Motion Planning using Sequential Simplifications

no code implementations • 4 Jun 2019 • Andreas Orthey, Marc Toussaint

Motion planning problems can be simplified by admissible projections of the configuration space to sequences of lower-dimensional quotient-spaces, called sequential simplifications.

Motion Planning

Deep Workpiece Region Segmentation for Bin Picking

no code implementations • 8 Sep 2019 • Muhammad Usman Khalid, Janik M. Hager, Werner Kraus, Marco F. Huber, Marc Toussaint

For most industrial bin picking solutions, the pose of a workpiece is localized by matching a CAD model to a point cloud obtained from a 3D sensor.

Pose Estimation

Prediction of Human Full-Body Movements with Motion Optimization and Recurrent Neural Networks

no code implementations • 4 Oct 2019 • Philipp Kratzer, Marc Toussaint, Jim Mainprice

Human movement prediction is difficult as humans naturally exhibit complex behaviors that can change drastically from one environment to the next.

motion prediction

Qgraph-bounded Q-learning: Stabilizing Model-Free Off-Policy Deep Reinforcement Learning

no code implementations • 15 Jul 2020 • Sabrina Hoppe, Marc Toussaint

By selecting a subgraph with a favorable structure, we construct a simplified Markov Decision Process for which exact Q-values can be computed efficiently as more data comes in.

Q-Learning • reinforcement-learning +1
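The snippet above mentions computing exact Q-values on a simplified MDP. As a generic illustration of that idea (not the paper's Qgraph construction), exact Q-values of a small deterministic MDP can be obtained by iterating the Bellman optimality equation to convergence; the chain MDP below is a made-up example:

```python
# Exact Q-values for a tiny deterministic chain MDP via Bellman iteration.
# States 0..3; state 3 is terminal. Actions: 0 = left, 1 = right.
# Reward 1.0 for entering the terminal state, 0 otherwise.
GAMMA = 0.9
N_STATES, N_ACTIONS, TERMINAL = 4, 2, 3

def step(s, a):
    s2 = min(s + 1, TERMINAL) if a == 1 else max(s - 1, 0)
    r = 1.0 if (s2 == TERMINAL and s != TERMINAL) else 0.0
    return s2, r

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
for _ in range(100):  # iterate until numerically converged
    for s in range(N_STATES):
        if s == TERMINAL:
            continue
        for a in range(N_ACTIONS):
            s2, r = step(s, a)
            Q[s][a] = r + GAMMA * max(Q[s2])

# From state 2, going right reaches the goal immediately: Q = 1.0;
# from state 1 it is discounted once: Q = 0.9.
assert abs(Q[2][1] - 1.0) < 1e-9 and abs(Q[1][1] - 0.9) < 1e-9
```

Such exact values can serve as a reference (or bound) for a model-free learner on the full problem, which is the flavor of idea the paper's title suggests.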

Plan-Based Asymptotically Equivalent Reward Shaping

no code implementations • ICLR 2021 • Ingmar Schubert, Ozgur S Oguz, Marc Toussaint

In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration.

reinforcement-learning • Reinforcement Learning (RL)

Visualization of Nonlinear Programming for Robot Motion Planning

no code implementations • 28 Jan 2021 • David Hägele, Moataz Abdelaal, Ozgur S. Oguz, Marc Toussaint, Daniel Weiskopf

Nonlinear programming targets nonlinear optimization with constraints, which is a generic yet complex methodology involving humans for problem modeling and algorithms for problem solving.

Motion Planning • Robotics • Human-Computer Interaction • Numerical Analysis • H.5.2; G.1.6

Deep 6-DoF Tracking of Unknown Objects for Reactive Grasping

no code implementations • 9 Mar 2021 • Marc Tuscher, Julian Hörz, Danny Driess, Marc Toussaint

We propose a robotic manipulation system, which is able to grasp a wide variety of formerly unseen objects and is robust against object perturbations and inferior grasping points.

Object • Object Tracking +1

GraspME -- Grasp Manifold Estimator

no code implementations • 5 Jul 2021 • Janik Hager, Ruben Bauer, Marc Toussaint, Jim Mainprice

To this end, we define grasp manifolds via a set of key points and locate them in images using a Mask R-CNN backbone.

Keypoint Estimation

A System for Traded Control Teleoperation of Manipulation Tasks using Intent Prediction from Hand Gestures

no code implementations • 5 Jul 2021 • Yoojin Oh, Marc Toussaint, Jim Mainprice

After presenting all the components of the system and their empirical evaluation, we present experimental results comparing our pipeline to a direct traded control approach (i.e., one that does not use prediction), which show that intent prediction reduces the overall task execution time.

object-detection • Object Detection

Plan-Based Relaxed Reward Shaping for Goal-Directed Tasks

no code implementations • 14 Jul 2021 • Ingmar Schubert, Ozgur S. Oguz, Marc Toussaint

In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration.

Reinforcement Learning (RL)

Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning

no code implementations • 2 Oct 2021 • Danny Driess, Jung-Su Ha, Marc Toussaint, Russ Tedrake

We show that representing objects as signed-distance fields not only makes it possible to learn and represent a variety of models with higher accuracy than point-cloud and occupancy-measure representations, but also that SDF-based models are well suited to optimization-based planning.
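For readers unfamiliar with the representation: a signed-distance field maps a point to its distance from an object's surface, negative inside and positive outside. A minimal hypothetical example for a sphere (the paper learns far richer, functional models on top of such fields):

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside."""
    return math.dist(point, center) - radius

# Unit sphere at the origin (made-up example values).
c, r = (0.0, 0.0, 0.0), 1.0
assert sphere_sdf((2.0, 0.0, 0.0), c, r) == 1.0   # outside
assert sphere_sdf((0.0, 0.0, 0.0), c, r) == -1.0  # at the center
assert sphere_sdf((1.0, 0.0, 0.0), c, r) == 0.0   # on the surface
```

The sign convention makes collision constraints easy to express in a planner: a configuration is collision-free when the SDF is positive at all query points.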

Learning Neural Implicit Functions as Object Representations for Robotic Manipulation

no code implementations • 29 Sep 2021 • Jung-Su Ha, Danny Driess, Marc Toussaint

Robotic manipulation planning is the problem of finding a sequence of robot configurations that involves interactions with objects in the scene, e.g., grasp, placement, tool-use, etc.

Open-Ended Question Answering • Robot Manipulation

Learning Multi-Object Dynamics with Compositional Neural Radiance Fields

no code implementations • 24 Feb 2022 • Danny Driess, Zhiao Huang, Yunzhu Li, Russ Tedrake, Marc Toussaint

We present a method to learn compositional multi-object dynamics models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.

Object

Reinforcement Learning with Neural Radiance Fields

no code implementations • 3 Jun 2022 • Danny Driess, Ingmar Schubert, Pete Florence, Yunzhu Li, Marc Toussaint

This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information.

reinforcement-learning • Reinforcement Learning (RL)

Global Safe Sequential Learning via Efficient Knowledge Transfer

1 code implementation • 22 Feb 2024 • Cen-You Li, Olaf Duennbier, Marc Toussaint, Barbara Rakitsch, Christoph Zimmer

As transferable source knowledge is often available in safety critical experiments, we propose to consider transfer safe sequential learning to accelerate the learning of safety.

Active Learning • Bayesian Optimization +2
