Search Results for author: Jean-Jacques Slotine

Found 21 papers, 6 papers with code

A Quorum Sensing Inspired Algorithm for Dynamic Clustering

no code implementations 16 Mar 2013 Feng Tan, Jean-Jacques Slotine

The algorithm treats each data point as a single cell, and uses knowledge of local connectivity to cluster cells into multiple colonies simultaneously.

Clustering Community Detection +2
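A minimal sketch of the local-connectivity idea: treat each data point as a "cell" and group cells into colonies as connected components of an epsilon-neighborhood graph. This is an illustrative stand-in, not the paper's quorum-sensing dynamics.

```python
import numpy as np

def epsilon_graph_clusters(points, eps):
    """Cluster points into 'colonies' as connected components of the
    eps-neighborhood graph (illustrative local-connectivity clustering,
    NOT the paper's quorum-sensing algorithm)."""
    n = len(points)
    # adjacency: cells within eps of each other are neighbors
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = d <= eps
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # flood-fill one connected component
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j]):
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(epsilon_graph_clusters(pts, eps=0.5))  # two colonies: [0 0 1 1]
```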

Notes on stable learning with piecewise-linear basis functions

no code implementations 25 Apr 2018 Winfried Lohmiller, Philipp Gassert, Jean-Jacques Slotine

We discuss technical results on learning function approximations using piecewise-linear basis functions, and analyze their stability and convergence using nonlinear contraction theory.

Time Dependence in Non-Autonomous Neural ODEs

no code implementations ICLR Workshop DeepDiffEq 2019 Jared Quincy Davis, Krzysztof Choromanski, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosterov, Adrian Weller, Ameesh Makadia, Vikas Sindhwani

Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation.

Image Classification Video Prediction
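The continuous-depth idea above can be sketched with a fixed-step Euler solver standing in for forward propagation (illustrative only; practical Neural ODEs use adaptive solvers, and the adjoint method for constant-memory gradients):

```python
import numpy as np

def f(h, t, W):
    """Vector field (the 'continuous layer'): hidden-state dynamics dh/dt."""
    return np.tanh(W @ h)

def odeint_euler(h0, W, t0=0.0, t1=1.0, steps=100):
    """Forward propagation = solving the ODE; depth becomes continuous time.
    Fixed-step Euler for clarity only."""
    h, dt = h0.copy(), (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt, W)
    return h

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) / 2.0
h1 = odeint_euler(rng.normal(size=4), W)
print(h1.shape)  # (4,)
```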

An Ode to an ODE

no code implementations NeurIPS 2020 Krzysztof Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani

We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
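The "matrix flow on the orthogonal group O(d)" idea can be sketched as follows, assuming a constant skew-symmetric generator S and using Cayley steps so orthogonality is preserved exactly (a sketch of the group-flow idea, not the ODEtoODE architecture):

```python
import numpy as np

def cayley(S):
    """Map a skew-symmetric S to an orthogonal matrix (Cayley transform)."""
    I = np.eye(S.shape[0])
    return np.linalg.solve(I - S, I + S)

def orthogonal_flow(W0, S, dt=0.01, steps=200):
    """Evolve W' = W S (S skew-symmetric) on O(d) by composing Cayley
    steps, so each iterate is exactly orthogonal."""
    W = W0.copy()
    Q = cayley(0.5 * dt * S)  # one orthogonal step
    for _ in range(steps):
        W = W @ Q
    return W

d = 3
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d))
S = A - A.T              # skew-symmetric generator
W = orthogonal_flow(np.eye(d), S)
print(np.allclose(W.T @ W, np.eye(d)))  # True: still on O(d)
```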

Learning-based Adaptive Control using Contraction Theory

no code implementations 4 Mar 2021 Hiroyasu Tsukamoto, Soon-Jo Chung, Jean-Jacques Slotine

Adaptive control is subject to stability and performance issues when a learned model is used to enhance its performance.

Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems

1 code implementation 7 Mar 2021 Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning regression

Dynamical Pose Estimation

1 code implementation ICCV 2021 Heng Yang, Chris Doran, Jean-Jacques Slotine

We study the problem of aligning two sets of 3D geometric primitives given known correspondences.

Point Cloud Registration Pose Estimation
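For the point-point special case with known correspondences, the classical closed-form SVD solution (Kabsch/Horn) is shown below; the paper's contribution is a dynamical-systems alternative to such solvers, so this is only the baseline problem:

```python
import numpy as np

def kabsch(P, Q):
    """Best rotation R and translation t aligning P onto Q (rows are
    corresponding points), via the classical SVD solution."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # proper rotation (det = +1)
    return R, cQ - R @ cP

P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.]])
Q = P @ Rz.T + np.array([1., 2., 3.])
R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q))  # True: alignment recovered
```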

A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability

no code implementations 2 Oct 2021 Hiroyasu Tsukamoto, Soon-Jo Chung, Jean-Jacques Slotine, Chuchu Fan

This paper presents a theoretical overview of a Neural Contraction Metric (NCM): a neural network model of an optimal contraction metric and corresponding differential Lyapunov function, the existence of which is a necessary and sufficient condition for incremental exponential stability of non-autonomous nonlinear system trajectories.
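In standard contraction-theory notation (a generic statement of the condition such a metric certifies, not quoted from the paper), a metric $M(x,t) \succ 0$ for the system $\dot{x} = f(x,t)$ with Jacobian $A(x,t) = \partial f/\partial x$ yields incremental exponential stability when:

```latex
\dot{M}(x,t) + M(x,t)\,A(x,t) + A(x,t)^{\top} M(x,t) \preceq -2\alpha\, M(x,t),
\qquad M(x,t) \succ 0
% V = \delta x^{\top} M \delta x is then a differential Lyapunov function,
% and \|\delta x(t)\| decays exponentially at rate \alpha.
```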

Control-oriented meta-learning

1 code implementation 14 Apr 2022 Spencer M. Richards, Navid Azizan, Jean-Jacques Slotine, Marco Pavone

Real-time adaptation is imperative to the control of robots operating in complex, dynamic environments.

Meta-Learning regression

Unmatched Control Barrier Functions: Certainty Equivalence Adaptive Safety

no code implementations 28 Jul 2022 Brett T. Lopez, Jean-Jacques Slotine

This work applies universal adaptive control to control barrier functions to achieve forward invariance of a safe set despite the presence of unmatched parametric uncertainties.

Learning Control-Oriented Dynamical Structure from Data

1 code implementation 6 Feb 2023 Spencer M. Richards, Jean-Jacques Slotine, Navid Azizan, Marco Pavone

Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system.

Scaling Spherical CNNs

1 code implementation 8 Jun 2023 Carlos Esteves, Jean-Jacques Slotine, Ameesh Makadia

Spherical CNNs generalize CNNs to functions on the sphere, by using spherical convolutions as the main linear operation.

Weather Forecasting
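The spectral view behind spherical convolutions: under the Driscoll–Healy convention (normalization constants differ across conventions), convolution with a zonal filter $k$ is a pointwise product of spherical-harmonic coefficients, analogous to the planar convolution theorem:

```latex
\widehat{(f \ast k)}_{\ell m}
  \;=\; 2\pi \sqrt{\tfrac{4\pi}{2\ell + 1}}\; \hat{f}_{\ell m}\, \hat{k}_{\ell 0}
% Each degree \ell is scaled independently; only the m = 0 (zonal)
% coefficients of the filter enter.
```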

MinMax Networks

no code implementations 15 Jun 2023 Winfried Lohmiller, Philipp Gassert, Jean-Jacques Slotine

Global exponential convergence of the algorithm is established using Contraction Theory with Inequality Constraints, which is extended from the continuous to the discrete case in this paper. In contrast to deep learning, the parametrization of each linear function piece in the proposed MinMax network is linear.
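A MinMax representation of a piecewise-linear function can be sketched as a min over groups of maxima of affine pieces; note the output is linear in the parameters (W, b). The grouping below is illustrative, not taken from the paper:

```python
import numpy as np

def minmax_net(x, W, b, groups):
    """f(x) = min over groups of max over affine pieces w_j.x + b_j.
    Continuous piecewise-linear, and linear in (W, b)."""
    z = W @ x + b                       # all affine pieces at once
    return min(np.max(z[g]) for g in groups)

# demo: |x| = max(x, -x) expressed as a MinMax net with one group
W = np.array([[1.0], [-1.0]])
b = np.array([0.0, 0.0])
print(minmax_net(np.array([-3.0]), W, b, groups=[[0, 1]]))  # 3.0
```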

$k$-Contraction in a Generalized Lurie System

no code implementations 14 Sep 2023 Ron Ofir, Jean-Jacques Slotine, Michael Margaliot

We derive a sufficient condition for $k$-contraction in a generalized Lurie system (GLS), that is, the feedback connection of a nonlinear dynamical system and a memoryless nonlinear function.
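For reference, the classical Lurie structure that the GLS generalizes is the feedback interconnection of a linear block with a memoryless nonlinearity $\phi$ (the generalized system replaces the linear block with a nonlinear dynamical system):

```latex
\dot{x} = A x + B u, \qquad y = C x, \qquad u = -\,\phi(y)
% \phi memoryless; the GLS allows nonlinear dynamics in place of (A, B, C).
```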

Contraction Properties of the Global Workspace Primitive

no code implementations 2 Oct 2023 Michaela Ennis, Leo Kozachkov, Jean-Jacques Slotine

To push forward the important emerging research field surrounding multi-area recurrent neural networks (RNNs), we expand theoretically and empirically on the provably stable RNNs of RNNs introduced by Kozachkov et al. in "RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks".

Stable Modular Control via Contraction Theory for Reinforcement Learning

no code implementations 7 Nov 2023 Bing Song, Jean-Jacques Slotine, Quang-Cuong Pham

We propose a novel way to integrate control techniques with reinforcement learning (RL) for stability, robustness, and generalization: leveraging contraction theory to realize modularity in neural control, which ensures that combining stable subsystems can automatically preserve the stability.

Reinforcement Learning (RL)

Dynamic Adaptation Gains for Nonlinear Systems with Unmatched Uncertainties

no code implementations 9 Nov 2023 Brett T. Lopez, Jean-Jacques Slotine

We present a new direct adaptive control approach for nonlinear systems with unmatched and matched uncertainties.

Neuron-Astrocyte Associative Memory

no code implementations 14 Nov 2023 Leo Kozachkov, Jean-Jacques Slotine, Dmitry Krotov

Such multi-neuron synapses are ubiquitous in models of Dense Associative Memory (also known as Modern Hopfield Networks) and are known to lead to superlinear memory storage capacity, which is a desirable computational feature.
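A generic Dense Associative Memory retrieval update (the common softmax / Modern Hopfield form, not the paper's neuron-astrocyte model) can be sketched as:

```python
import numpy as np

def dam_retrieve(x, patterns, beta=4.0, steps=5):
    """Pull state x toward the softmax-weighted combination of stored
    patterns; with large beta, x converges to the nearest pattern."""
    X = np.asarray(patterns, dtype=float)   # rows = stored patterns
    for _ in range(steps):
        s = beta * (X @ x)                  # similarity to each pattern
        p = np.exp(s - s.max()); p /= p.sum()
        x = X.T @ p                         # weighted recombination
    return x

mem = [[1, -1, 1, -1], [1, 1, -1, -1]]
out = dam_retrieve(np.array([0.9, -0.8, 1.1, -1.0]), mem)
print(np.allclose(np.sign(out), mem[0]))  # True: first pattern recalled
```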
