Search Results for author: Franziska Meier

Found 29 papers, 9 papers with code

EgoAdapt: A multi-stream evaluation study of adaptation to real-world egocentric user video

1 code implementation • 11 Jul 2023 • Matthias De Lange, Hamid Eghbalzadeh, Reuben Tan, Michael Iuzzolino, Franziska Meier, Karl Ridgeway

We introduce an evaluation framework that directly exploits the user's data stream with new metrics to measure the adaptation gain over the population model, online generalization, and hindsight performance.

Action Recognition • Continual Learning
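
As a rough illustration of what an adaptation-gain style metric can look like (a minimal sketch; the function name, the 0/1 correctness encoding, and the exact definition are assumptions here, not the paper's protocol), one can compare a user-adapted model against the frozen population model on the same egocentric stream:

```python
import numpy as np

def adaptation_gain(population_correct, adapted_correct):
    """Hypothetical adaptation gain: accuracy improvement of the
    user-adapted model over the frozen population model, measured
    on the same user stream (per-sample 0/1 correctness)."""
    population_correct = np.asarray(population_correct, dtype=float)
    adapted_correct = np.asarray(adapted_correct, dtype=float)
    return adapted_correct.mean() - population_correct.mean()

# Example: per-sample correctness on a short user stream.
pop = [1, 0, 0, 1, 0, 1]          # frozen population model
ada = [1, 0, 1, 1, 1, 1]          # model adapted online to this user
print(adaptation_gain(pop, ada))  # ~0.33 -> positive gain from adaptation
```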

Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement

no code implementations • 20 Mar 2023 • Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang

Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.

Cross-Domain Transfer via Semantic Skill Imitation

no code implementations • 14 Dec 2022 • Karl Pertsch, Ruta Desai, Vikash Kumar, Franziska Meier, Joseph J. Lim, Dhruv Batra, Akshara Rai

We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen.

Reinforcement Learning (RL) • Robot Manipulation

Model Based Meta Learning of Critics for Policy Gradients

no code implementations • 5 Apr 2022 • Sarah Bechtle, Ludovic Righetti, Franziska Meier

In this paper we present a framework to meta-learn the critic for gradient-based policy learning.

Meta-Learning

Differentiable and Learnable Robot Models

1 code implementation • 22 Feb 2022 • Franziska Meier, Austin Wang, Giovanni Sutanto, Yixin Lin, Paarth Shah

Building differentiable simulations of physical processes has recently received an increasing amount of attention.
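
As a toy illustration of why differentiability matters (a sketch with a planar two-link arm, not the library's actual API), autodiff gives end-effector Jacobians of a kinematics model for free, which is what lets such robot models sit inside gradient-based learning and control loops:

```python
import torch

def planar_2link_fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (toy example).
    q: tensor of shape (2,) with the two joint angles."""
    x = l1 * torch.cos(q[0]) + l2 * torch.cos(q[0] + q[1])
    y = l1 * torch.sin(q[0]) + l2 * torch.sin(q[0] + q[1])
    return torch.stack([x, y])

q = torch.tensor([0.3, -0.5], requires_grad=True)
ee = planar_2link_fk(q)

# The end-effector Jacobian d(ee)/d(q) comes from automatic
# differentiation of the kinematics, no hand-derived expressions needed.
jac = torch.autograd.functional.jacobian(planar_2link_fk, q)
print(ee, jac)
```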

Block Contextual MDPs for Continual Learning

no code implementations • 13 Oct 2021 • Shagun Sodhani, Franziska Meier, Joelle Pineau, Amy Zhang

In this work, we propose to examine this continual reinforcement learning setting through the block contextual MDP (BC-MDP) framework, which enables us to relax the assumption of stationarity.

Continual Learning • Generalization Bounds • +2

Quasi-Equivalence Discovery for Zero-Shot Emergent Communication

no code implementations • 14 Mar 2021 • Kalesha Bullard, Douwe Kiela, Franziska Meier, Joelle Pineau, Jakob Foerster

In contrast, in this work, we present a novel problem setting and the Quasi-Equivalence Discovery (QED) algorithm that allows for zero-shot coordination (ZSC), i.e., discovering protocols that can generalize to independently trained agents.

Multi-Modal Learning of Keypoint Predictive Models for Visual Object Manipulation

no code implementations • 8 Nov 2020 • Sarah Bechtle, Neha Das, Franziska Meier

Our evaluation shows that, from a few seconds of visual data, our approach learns to consistently predict visual keypoints on objects in the manipulator's hand, and can thus facilitate learning an extended kinematic chain that includes the grasped object in various configurations.

Object

Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations

no code implementations • 29 Oct 2020 • Kalesha Bullard, Franziska Meier, Douwe Kiela, Joelle Pineau, Jakob Foerster

Indeed, emergent communication is now a vibrant field of research, with common settings involving discrete cheap-talk channels.

Model-Based Inverse Reinforcement Learning from Visual Demonstrations

no code implementations • 18 Oct 2020 • Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier

Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem.

Model Predictive Control • reinforcement-learning • +1

Residual Learning from Demonstration: Adapting DMPs for Contact-rich Manipulation

no code implementations • 18 Aug 2020 • Todor Davchev, Kevin Sebastian Luck, Michael Burke, Franziska Meier, Stefan Schaal, Subramanian Ramamoorthy

Dynamic Movement Primitives (DMPs) are a popular way of extracting such policies through behaviour cloning (BC) but can struggle in the context of insertion.

Behavioural cloning • Friction • +1
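
For background on the DMPs referenced above (a minimal one-dimensional sketch of the standard transformation system, not this paper's residual-learning scheme), the primitive integrates a goal-directed second-order system driven by a learned forcing term:

```python
import numpy as np

def rollout_dmp(y0, g, forcing, tau=1.0, dt=0.01, T=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive (toy sketch).
    forcing(x): forcing term as a function of the phase x, typically
    fit from a demonstration via behaviour cloning."""
    y, z, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        ydot = z / tau
        zdot = (alpha_z * (beta_z * (g - y) - z) + forcing(x)) / tau
        xdot = -alpha_x * x / tau
        y, z, x = y + ydot * dt, z + zdot * dt, x + xdot * dt
        traj.append(y)
    return np.array(traj)

# With a zero forcing term the DMP simply converges to the goal g.
print(rollout_dmp(y0=0.0, g=1.0, forcing=lambda x: 0.0)[-1])
```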

Adversarial Continual Learning

1 code implementation • ECCV 2020 • Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach

We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks.

Continual Learning • Image Classification

Learning State-Dependent Losses for Inverse Dynamics Learning

1 code implementation • 10 Mar 2020 • Kristen Morse, Neha Das, Yixin Lin, Austin S. Wang, Akshara Rai, Franziska Meier

In both settings, the structured and state-dependent learned losses improve online adaptation speed compared to standard, state-independent loss functions.

Meta-Learning

Generalized Inner Loop Meta-Learning

3 code implementations • 3 Oct 2019 • Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala

Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem.

Meta-Learning • reinforcement-learning • +1
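
The nested-optimization pattern described above can be written in a stripped-down form (a MAML-style toy sketch in plain PyTorch; it does not use or represent the paper's higher library API): an inner gradient step is kept differentiable so an outer optimizer can update the initialization through it.

```python
import torch

# Toy nested optimization: the outer loop tunes an initialization theta
# so that one inner gradient step on a task loss works well
# (the MAML-style special case of a generalized inner loop).
theta = torch.zeros(2, requires_grad=True)
outer_opt = torch.optim.SGD([theta], lr=0.1)
inner_lr = 0.1

def task_loss(params, target):
    return ((params - target) ** 2).sum()

for step in range(100):
    target = torch.randn(2) + torch.tensor([1.0, -1.0])  # sampled "task"
    # Inner loop: one differentiable gradient step (create_graph=True
    # keeps the step in the graph so the outer update can see it).
    inner_grad = torch.autograd.grad(task_loss(theta, target), theta,
                                     create_graph=True)[0]
    adapted = theta - inner_lr * inner_grad
    # Outer loop: update theta through the inner step.
    outer_loss = task_loss(adapted, target)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()

print(theta)  # drifts toward the task distribution's mean target
```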

Meta Learning via Learned Loss

no code implementations • 25 Sep 2019 • Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures.

Meta-Learning • reinforcement-learning • +1

Meta-Learning via Learned Loss

1 code implementation • 12 Jun 2019 • Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav Sukhatme, Franziska Meier

This information shapes the learned loss function such that the environment does not need to provide it at meta-test time.

Meta-Learning
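
A heavily simplified sketch of the learned-loss idea (the network sizes, single inner step, and toy regression task below are assumptions for illustration, not the paper's exact algorithm): a small network acts as the loss, the model is updated with gradients of that learned loss, and the loss network is meta-trained so the adapted model does well on the true task objective.

```python
import torch
import torch.nn as nn

# Toy regression task used as the "true" objective at meta-train time.
def task_objective(pred, target):
    return ((pred - target) ** 2).mean()

model = nn.Linear(1, 1)
# Learned loss: maps (prediction, target) pairs to a scalar.
loss_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
meta_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)
inner_lr = 0.01

for meta_step in range(200):
    x = torch.randn(32, 1)
    y = 3.0 * x + 0.5
    pred = model(x)
    learned_loss = loss_net(torch.cat([pred, y], dim=-1)).mean()
    # Inner step: update the model with the *learned* loss, keeping
    # the graph so the meta-update can flow through it.
    grads = torch.autograd.grad(learned_loss, list(model.parameters()),
                                create_graph=True)
    new_params = [p - inner_lr * g
                  for p, g in zip(model.parameters(), grads)]
    # Evaluate the adapted model on the true task objective.
    new_pred = x @ new_params[0].t() + new_params[1]
    meta_loss = task_objective(new_pred, y)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    # Apply the inner update for real before the next meta-step.
    with torch.no_grad():
        for p, p_new in zip(model.parameters(), new_params):
            p.copy_(p_new)
```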

Curious iLQR: Resolving Uncertainty in Model-based RL

no code implementations • 15 Apr 2019 • Sarah Bechtle, Yixin Lin, Akshara Rai, Ludovic Righetti, Franziska Meier

In this work, we propose a model-based reinforcement learning (MBRL) framework that combines Bayesian modeling of the system dynamics with curious iLQR, an iterative LQR approach that considers model uncertainty.

Model-based Reinforcement Learning • reinforcement-learning • +1

A Hierarchical Bayesian Linear Regression Model with Local Features for Stochastic Dynamics Approximation

no code implementations • 11 Jul 2018 • Behnoosh Parsa, Keshav Rajasekaran, Franziska Meier, Ashis G. Banerjee

One of the challenges in model-based control of stochastic dynamical systems is that the state transition dynamics are complex, and it is neither easy nor efficient to make good-quality predictions of the states.

Model-based Reinforcement Learning • regression

SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Planning and Control

no code implementations • 2 Oct 2017 • Arunkumar Byravan, Felix Leeb, Franziska Meier, Dieter Fox

In this work, we present an approach to deep visuomotor control using structured deep dynamics models.

Online Learning of a Memory for Learning Rates

1 code implementation • 20 Sep 2017 • Franziska Meier, Daniel Kappler, Stefan Schaal

The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks.

Meta-Learning

Robust Gaussian Filtering using a Pseudo Measurement

no code implementations • 14 Sep 2015 • Manuel Wüthrich, Cristina Garcia Cifuentes, Sebastian Trimpe, Franziska Meier, Jeannette Bohg, Jan Issac, Stefan Schaal

The contribution of this paper is to show that any Gaussian filter can be made compatible with fat-tailed sensor models by applying one simple change: Instead of filtering with the physical measurement, we propose to filter with a pseudo measurement obtained by applying a feature function to the physical measurement.
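
To make the pseudo-measurement idea concrete (a toy 1-D sketch; the clipping feature function below is only illustrative, the paper derives its own feature function), a standard Gaussian measurement update is simply fed a transformed measurement instead of the raw one:

```python
import numpy as np

def kalman_update(mu, var, y, obs_var):
    """Standard 1-D Gaussian measurement update."""
    k = var / (var + obs_var)          # Kalman gain
    return mu + k * (y - mu), (1 - k) * var

def pseudo_measurement(y, mu, width=3.0):
    """Hypothetical feature function: saturate the innovation so a
    single fat-tailed outlier cannot drag the estimate arbitrarily far.
    (Illustrative only; not the feature function derived in the paper.)"""
    return mu + np.clip(y - mu, -width, width)

mu, var = 0.0, 1.0
for y in [0.2, -0.1, 50.0, 0.3]:       # 50.0 is a fat-tailed outlier
    mu, var = kalman_update(mu, var, pseudo_measurement(y, mu), obs_var=0.5)
    print(round(mu, 3))
```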

Incremental Local Gaussian Regression

no code implementations • NeurIPS 2014 • Franziska Meier, Philipp Hennig, Stefan Schaal

Locally weighted regression (LWR) was created as a nonparametric method that can approximate a wide range of functions, is computationally efficient, and can learn continually from very large amounts of incrementally collected data.

regression
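
For background, plain locally weighted regression can be summarized in a few lines (a batch sketch, not the incremental probabilistic reformulation this paper develops): each query point gets its own weighted least-squares fit, with weights from a locality kernel.

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=0.2):
    """Locally weighted linear regression at a single query point.
    Plain batch version of the idea this paper reformulates."""
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)   # locality kernel
    A = np.stack([np.ones_like(X), X], axis=1)            # bias + linear term
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)      # weighted LS fit
    return beta[0] + beta[1] * x_query

X = np.linspace(0, 2 * np.pi, 200)
y = np.sin(X) + 0.05 * np.random.randn(200)
print(lwr_predict(1.0, X, y))   # close to sin(1.0) ≈ 0.84
```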

Local Gaussian Regression

no code implementations • 4 Feb 2014 • Franziska Meier, Philipp Hennig, Stefan Schaal

Locally weighted regression was created as a nonparametric learning method that is computationally efficient, can learn from very large amounts of data, and can incorporate new data incrementally.

regression
