Search Results for author: Prashant G. Mehta

Found 13 papers, 0 papers with code

Neural Models and Algorithms for Sensorimotor Control of an Octopus Arm

no code implementations • 2 Feb 2024 • Tixian Wang, Udit Halder, Ekaterina Gribkova, Rhanor Gillette, Mattia Gazzola, Prashant G. Mehta

In this article, a biophysically realistic model of a soft octopus arm with internal musculature is presented.

A Survey of Feedback Particle Filter and related Controlled Interacting Particle Systems (CIPS)

no code implementations • 3 Jan 2023 • Amirhossein Taghvaei, Prashant G. Mehta

In this survey, we describe controlled interacting particle systems (CIPS) that approximate the solutions of the optimal filtering and optimal control problems.
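The survey's central object, the feedback particle filter, admits a compact statement. The following is the standard form from the FPF literature (notation here is generic and may differ from the survey's):

```latex
% Signal and observation models
dX_t = a(X_t)\,dt + dB_t, \qquad dZ_t = h(X_t)\,dt + dW_t .
% Feedback particle filter: each particle X_t^i is driven by a
% gain-times-innovation feedback term (Stratonovich form)
dX_t^i = a(X_t^i)\,dt + dB_t^i
  + \mathsf{K}_t(X_t^i)\circ\Big(dZ_t - \tfrac{1}{2}\big(h(X_t^i)+\hat h_t\big)\,dt\Big),
\qquad \hat h_t := \mathbb{E}\big[h(X_t^i)\,\big|\,\mathcal{Z}_t\big],
% where the gain K_t solves a weighted Poisson equation
% (rho_t is the conditional density of X_t^i):
\nabla\cdot\big(\rho_t\,\mathsf{K}_t\big) = -\big(h-\hat h_t\big)\,\rho_t .
```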

Modeling the Neuromuscular Control System of an Octopus Arm

no code implementations • 12 Nov 2022 • Tixian Wang, Udit Halder, Ekaterina Gribkova, Mattia Gazzola, Prashant G. Mehta

The octopus arm is a neuromechanical system involving a complex interplay between the peripheral nervous system (PNS) and the arm musculature.

Energy Shaping Control of a Muscular Octopus Arm Moving in Three Dimensions

no code implementations • 9 Sep 2022 • Heng-Sheng Chang, Udit Halder, Chia-Hsien Shih, Noel Naughton, Mattia Gazzola, Prashant G. Mehta

Key contributions of this paper are: (i) modeling of major muscle groups to elicit three-dimensional movements; (ii) a mathematical formulation for muscle activations based on a stored energy function; and (iii) a computationally efficient procedure to design task-specific equilibrium configurations, obtained by solving an optimization problem in the Special Euclidean group SE(3).

A Sensory Feedback Control Law for Octopus Arm Movements

no code implementations • 1 Apr 2022 • Tixian Wang, Udit Halder, Ekaterina Gribkova, Rhanor Gillette, Mattia Gazzola, Prashant G. Mehta

The main contribution of this paper is a novel sensory feedback control law for an octopus arm.

Controlled Interacting Particle Algorithms for Simulation-based Reinforcement Learning

no code implementations • 2 Jul 2021 • Anant Joshi, Amirhossein Taghvaei, Prashant G. Mehta, Sean P. Meyn

This paper is concerned with optimal control problems for control systems in continuous time, and interacting particle system methods designed to construct approximate control solutions.

Reinforcement Learning (RL)

Convex Q-Learning, Part 1: Deterministic Optimal Control

no code implementations • 8 Aug 2020 • Prashant G. Mehta, Sean P. Meyn

It is shown that the algorithms are in fact very different: convex Q-learning solves a convex program that approximates the Bellman equation, whereas the theory for DQN is no stronger than for Watkins' algorithm with function approximation: (a) both seek solutions to the same fixed-point equation, and (b) the ODE approximations for the two algorithms coincide, and little is known about the stability of this ODE.

Q-Learning
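For reference, Watkins' algorithm mentioned in the comparison is the classical tabular Q-learning update. Here is a minimal sketch on a small deterministic chain MDP (the MDP is hypothetical, chosen only to illustrate the update, and is not from the paper):

```python
import numpy as np

# Toy deterministic chain MDP (illustrative only): states 0..2,
# actions 0 (stay) / 1 (move right); reward 1.0 on first reaching state 2.
n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.5

def step(s, a):
    s_next = min(s + a, n_states - 1)
    r = 1.0 if (s_next == n_states - 1 and s != n_states - 1) else 0.0
    return s_next, r

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)
for _ in range(2000):
    s = rng.integers(n_states)       # exploring starts
    a = rng.integers(n_actions)      # uniform behavior policy
    s_next, r = step(s, a)
    # Watkins' update: stochastic approximation of the Bellman
    # fixed-point equation Q(s,a) = r + gamma * max_a' Q(s',a')
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

With deterministic transitions the iterates converge to the exact optimal Q-function, e.g. Q[1, 1] → 1.0 and Q[0, 1] → 0.9 here.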

An Optimal Transport Formulation of the Ensemble Kalman Filter

no code implementations • 5 Oct 2019 • Amirhossein Taghvaei, Prashant G. Mehta

For this algorithm, the equations for empirical mean and covariance are derived and shown to be identical to the Kalman filter.
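As an illustration of that identity, here is a minimal deterministic (square-root-style) EnKF measurement update for a scalar linear-Gaussian model — a sketch under simplified assumptions, not the paper's optimal-transport algorithm: the ensemble is shifted and rescaled so that its empirical mean and covariance match the Kalman-filter posterior exactly.

```python
import numpy as np

def enkf_update(ensemble, y, H, R):
    """Deterministic EnKF measurement update for a scalar state.

    Shifts and rescales the ensemble so its empirical mean and
    variance equal the Kalman-filter posterior computed from the
    empirical prior statistics."""
    m = ensemble.mean()
    C = ensemble.var(ddof=1)
    S = H * C * H + R               # innovation variance
    K = C * H / S                   # Kalman gain
    m_post = m + K * (y - H * m)    # Kalman mean update
    C_post = (1.0 - K * H) * C      # Kalman covariance update
    # rescale deviations so the empirical variance equals C_post
    return m_post + np.sqrt(C_post / C) * (ensemble - m)

rng = np.random.default_rng(0)
ensemble = rng.normal(loc=1.0, scale=2.0, size=500)
updated = enkf_update(ensemble, y=3.0, H=1.0, R=0.5)
```

By construction, `updated.mean()` and `updated.var(ddof=1)` reproduce the Kalman posterior mean and variance for the empirical prior.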

Accelerated Flow for Probability Distributions

no code implementations • 10 Jan 2019 • Amirhossein Taghvaei, Prashant G. Mehta

This paper extends the variational formulation of accelerated gradient methods (Wibisono et al. 2016) from vector-valued variables to probability distributions.

Accelerated Gradient Flow for Probability Distributions

no code implementations • 27 Sep 2018 • Amirhossein Taghvaei, Prashant G. Mehta

In particular, we extend the recent variational formulation of accelerated gradient methods (Wibisono et al. 2016) from vector-valued variables to probability distributions.
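For context, the variational formulation being extended is the Bregman Lagrangian of Wibisono et al. (2016): for a convex objective $f$ and the Bregman divergence $D_h$ of a distance-generating function $h$,

```latex
\mathcal{L}(X,\dot X,t)
  = e^{\alpha_t+\gamma_t}\Big( D_h\big(X + e^{-\alpha_t}\dot X,\; X\big)
  - e^{\beta_t} f(X) \Big),
```

under the ideal scaling conditions $\dot\beta_t \le e^{\alpha_t}$ and $\dot\gamma_t = e^{\alpha_t}$; the Euler–Lagrange equations of this Lagrangian recover accelerated gradient flows. The extension described in the abstract replaces the vector-valued variable $X$ with a probability distribution.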

How regularization affects the critical points in linear networks

no code implementations • NeurIPS 2017 • Amirhossein Taghvaei, Jin W. Kim, Prashant G. Mehta

The formulation is used to provide a complete characterization of the critical points in terms of the solutions of a nonlinear matrix-valued equation, referred to as the characteristic equation.
