Search Results for author: Todd D. Murphey

Found 12 papers, 3 papers with code

Automated Gait Generation For Walking, Soft Robotic Quadrupeds

no code implementations • 30 Sep 2023 • Jake Ketchum, Sophia Schiffer, Muchen Sun, Pranav Kaarthik, Ryan L. Truby, Todd D. Murphey

Gait generation for soft robots is challenging due to the nonlinear dynamics and high-dimensional input spaces of soft actuators.

Maximum diffusion reinforcement learning

1 code implementation • 26 Sep 2023 • Thomas A. Berrueta, Allison Pinosky, Todd D. Murphey

The assumption that data are independent and identically distributed underpins all machine learning.

Decision Making • reinforcement-learning • +1

Dynamical System Segmentation for Information Measures in Motion

no code implementations • 9 Dec 2020 • Thomas A. Berrueta, Ana Pervan, Kathleen Fitzsimons, Todd D. Murphey

For a given task, we specify an optimal agent, and compute an alphabet of behaviors representative of the task.

Robotics

Learning from Human Directional Corrections

1 code implementation • 30 Nov 2020 • Wanxin Jin, Todd D. Murphey, Zehui Lu, Shaoshuai Mou

This paper proposes a novel approach that enables a robot to learn an objective function incrementally from human directional corrections.

Motion Planning
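
The snippet above describes incrementally updating an objective function from human directional corrections. As a hedged illustration only (not the paper's algorithm, and with a made-up feature map), one simple reading is a perceptron-style update that enforces "cost should decrease along the corrected direction":

```python
import numpy as np

# Toy sketch: learn weights theta of a cost c(u) = theta @ phi(u) from
# directional corrections. A correction d at input u is read as the
# constraint d @ grad_u c(u) <= 0 ("cost decreases along d"); a violated
# constraint triggers a perceptron-style update of theta. The feature
# map phi below is hypothetical, purely for illustration.

def phi_jacobian(u):
    # Jacobian of phi(u) = [u0^2/2, u1^2/2, u0, u1] with respect to u
    return np.array([[u[0], 0.0],
                     [0.0, u[1]],
                     [1.0, 0.0],
                     [0.0, 1.0]])

def correction_update(theta, u, d, lr=0.5):
    J = phi_jacobian(u)
    violation = d @ (J.T @ theta)      # positive => cost rises along d
    if violation > 0.0:
        theta = theta - lr * (J @ d)   # gradient of the violation in theta
    return theta
```

Each triggered update reduces the violation d @ grad_u c(u) by lr * ||J d||^2, so repeated corrections steer theta toward a cost consistent with the human's directions; the paper's actual formulation should be consulted for the real update rule.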

Derivative-Based Koopman Operators for Real-Time Control of Robotic Systems

no code implementations • 12 Oct 2020 • Giorgos Mamakoukas, Maria L. Castano, Xiaobo Tan, Todd D. Murphey

This paper presents a generalizable methodology for data-driven identification of nonlinear dynamics that bounds the model error in terms of the prediction horizon and the magnitude of the derivatives of the system states.
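
The abstract above concerns fitting a linear (Koopman) operator to nonlinear dynamics from data. As a minimal, generic sketch of the standard least-squares fit on lifted states (not the paper's derivative-based construction; the lifting function here is an arbitrary example):

```python
import numpy as np

# Generic EDMD-style Koopman approximation: fit a linear operator K on
# lifted states by least squares. This is NOT the paper's derivative-
# based method; the dictionary in lift() is a made-up illustration.

def lift(x):
    # hypothetical lifting: the state plus simple nonlinear features
    return np.array([x[0], x[1], x[0] ** 2, x[0] * x[1]])

def fit_koopman(X, Y):
    # X, Y: (n_samples, n_state) arrays, Y[i] the successor of X[i].
    # Solve min_K || lift(X) K - lift(Y) ||_F by least squares.
    Psi_X = np.array([lift(x) for x in X])
    Psi_Y = np.array([lift(y) for y in Y])
    K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
    return K  # linear one-step predictor in lifted coordinates
```

Prediction is then `lift(x) @ K`, whose first two entries estimate the next state; the paper's contribution is bounding the resulting model error in terms of the prediction horizon and state derivatives.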

Learning from Sparse Demonstrations

2 code implementations • 5 Aug 2020 • Wanxin Jin, Todd D. Murphey, Dana Kulić, Neta Ezer, Shaoshuai Mou

The time stamps of the keyframes can be different from the time of the robot's actual execution.

Motion Planning

An Ergodic Measure for Active Learning From Equilibrium

no code implementations • 5 Jun 2020 • Ian Abraham, Ahalya Prabhakar, Todd D. Murphey

We show that our method is able to maintain Lyapunov attractiveness with respect to the equilibrium task while actively generating data for learning tasks such as Bayesian optimization, model learning, and off-policy reinforcement learning.

Active Learning • Robotics

Active Area Coverage from Equilibrium

no code implementations • 8 Feb 2019 • Ian Abraham, Ahalya Prabhakar, Todd D. Murphey

This paper develops a method that lets robots incorporate stability guarantees while actively seeking out informative measurements through coverage.

Robotics

Decentralized Ergodic Control: Distribution-Driven Sensing and Exploration for Multi-Agent Systems

no code implementations • 13 Jun 2018 • Ian Abraham, Todd D. Murphey

We present a decentralized ergodic control policy for time-varying area coverage problems for multiple agents with nonlinear dynamics.

Robotics • Systems and Control

Data-Driven Measurement Models for Active Localization in Sparse Environments

no code implementations • 31 May 2018 • Ian Abraham, Anastasia Mavrommati, Todd D. Murphey

Exploration with respect to the information density based on the data-driven measurement model enables localization.

Robotics

Ergodic Exploration using Binary Sensing for Non-Parametric Shape Estimation

no code implementations • 5 Sep 2017 • Ian Abraham, Ahalya Prabhakar, Mitra J. Z. Hartmann, Todd D. Murphey

Current methods to estimate object shape, using either vision or touch, generally depend on high-resolution sensing.

Robotics

Real-Time Area Coverage and Target Localization using Receding-Horizon Ergodic Exploration

no code implementations • 28 Aug 2017 • Anastasia Mavrommati, Emmanouil Tzorakoleftherakis, Ian Abraham, Todd D. Murphey

Although a number of solutions exist for the problems of coverage, search, and target localization (commonly addressed separately), it remains a largely open research question whether a unified strategy can address these objectives coherently without being application-specific.

Robotics
