In this work, we propose a novel safe and scalable decentralized solution for multi-agent control in the presence of stochastic disturbances.
It can be difficult to autonomously produce driver behavior that appears natural to other traffic participants.
The proposed method leverages a game theoretic differential dynamic programming approach with barrier states to handle parametric and non-parametric uncertainties in safety-critical control systems.
However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often rely on constructing log-likelihood objectives. This raises questions on the suitability of SB models as a principled alternative for generative applications.
We propose a novel second-order optimization framework for training the emerging deep continuous-time models, specifically the Neural Ordinary Differential Equations (Neural ODEs).
The control action composition is achieved by taking a weighted mixture of the existing controllers according to the contribution of each component task.
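The weighted-mixture composition described above can be sketched in a few lines; the component controllers and weights below are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

def compose_control(controllers, weights, x):
    """Blend component controllers by task-contribution weights.

    controllers: list of callables u_i = f_i(x)
    weights: nonnegative per-task weights (normalized below)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the mixture is a convex combination
    actions = np.stack([c(x) for c in controllers])
    return w @ actions  # weighted mixture of control actions

# Hypothetical component controllers for two sub-tasks
goal = lambda x: -1.0 * x           # drive the state to the origin
damp = lambda x: -0.5 * np.sign(x)  # add damping
u = compose_control([goal, damp], [0.8, 0.2], np.array([2.0, -1.0]))
```

Each component's weight reflects how much its task contributes to the composite objective; the normalization keeps the mixture a convex combination of the existing control actions.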
In this paper, we introduce a novel deep learning based solution to the Powered-Descent Guidance (PDG) problem, grounded in principles of nonlinear Stochastic Optimal Control (SOC) and Feynman-Kac theory.
One of the main challenges in autonomous robotic exploration and navigation in unknown and unstructured environments is determining where the robot can or cannot safely move.
This work proposes an optimal safe controller minimizing an infinite horizon cost functional subject to control barrier functions (CBFs) safety conditions.
The connection between training deep neural networks (DNNs) and optimal control theory (OCT) has attracted considerable attention as a principled tool of algorithmic design.
In this paper, we provide a generalized framework for Variational Inference-Stochastic Optimal Control by using the non-extensive Tsallis divergence.
The development enforces safety by means of barrier functions used in optimization through the construction of barrier states (BaS) which are embedded in the control system's model.
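The barrier-state embedding can be sketched as follows; the safety function and the inverse-barrier choice below are illustrative assumptions, since the construction varies with the constraint:

```python
import numpy as np

# Sketch of barrier-state (BaS) augmentation, assuming a safety
# function h(x) > 0 on the safe set (the function below is a toy).
def h(x):
    return 4.0 - x[0] ** 2  # safe while |x0| < 2 (assumed constraint)

def barrier_state(x):
    # Inverse barrier: blows up as h(x) -> 0, stays bounded inside.
    return 1.0 / h(x)

def augmented_state(x):
    # The BaS is appended to the state, so it evolves with the model;
    # keeping the augmented system's trajectories bounded certifies
    # that h(x) never reaches zero, i.e. the safe set stays invariant.
    return np.append(x, barrier_state(x))

x = np.array([1.0, 0.0])
xa = augmented_state(x)
```

Because the barrier value is now part of the model, safety is handled by the same optimization machinery that handles performance, rather than by a separate constraint solver.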
Distributed algorithms for both discrete-time and continuous-time linearly solvable optimal control (LSOC) problems of networked multi-agent systems (MASs) are investigated in this paper.
We showcase superior performance of our framework over the state-of-the-art deep fictitious play algorithm on an inter-bank lending/borrowing problem in terms of multiple metrics.
Local control actions that rely only on agents' local observations are designed to optimize the joint cost functions of subsystems.
The proposed approach achieves both compositionality and optimality of control actions within the cooperative MAS framework, in both discrete and continuous time, in a sample-efficient manner; this reduces the burden of re-computing optimal control solutions when the MASs face a new task.
This paper introduces a new formulation for stochastic optimal control and stochastic dynamic optimization that ensures safety with respect to state and control constraints.
We present a general framework for optimizing the Conditional Value-at-Risk for dynamical systems using stochastic search.
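A minimal sample-based estimator makes the risk measure concrete; the estimator form below is the standard empirical CVaR, not code from the paper:

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Conditional Value-at-Risk: the mean of the worst (1 - alpha)
    fraction of sampled trajectory costs."""
    costs = np.sort(np.asarray(costs, dtype=float))
    var = np.quantile(costs, alpha)   # Value-at-Risk threshold
    tail = costs[costs >= var]        # tail beyond the threshold
    return tail.mean()

samples = np.array([1.0, 2.0, 3.0, 10.0])
risk = cvar(samples, alpha=0.75)
```

Optimizing this tail expectation, rather than the mean cost, biases the controller against rare high-cost outcomes of the stochastic dynamics.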
Connections between training Deep Neural Networks (DNNs) and optimal control theory have attracted considerable attention as a principled tool of algorithmic design.
In this work we propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures.
In this work, we present a method for obtaining an implicit objective function for vision-based navigation.
Interpretation of Deep Neural Networks (DNNs) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited.
This uncertainty may come from errors in learning (due to a lack of data, for example), or may be inherent to the system.
In this work, we couple a model predictive control (MPC) framework to a visual pipeline.
We propose a framework which satisfies these constraints while allowing the use of deep neural networks for learning model uncertainties.
In this article, we provide one possible way to align existing branches of deep learning theory through the lens of dynamical system and optimal control.
We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman partial differential equations.
We consider the problem of online adaptation of a neural network designed to represent vehicle dynamics.
The proposed information processing architecture is used to support a perceptual attention-based predictive control algorithm that leverages model predictive control (MPC), convolutional neural networks (CNNs), and uncertainty quantification methods.
When the model oracle is learned online, these algorithms can provably accelerate the best known convergence rate by up to an order of magnitude.
We propose the use of Bayesian neural networks, which provide both a mean value and an uncertainty estimate as output, to enhance the safety of learned control policies when a test-time input differs significantly from the training set.
The barrier certificates establish a non-conservative forward invariant safe region, in which high probability safety guarantees are provided based on the statistics of the Gaussian Process.
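A hedged sketch of the high-probability safety check implied above: declare a state safe when the Gaussian-process lower confidence bound on the barrier value is positive. The GP posterior is stubbed with fixed numbers here; the confidence multiplier is an assumed parameter:

```python
def safe_with_high_probability(mu, sigma, beta=2.0):
    """mu, sigma: GP posterior mean/std of the barrier value at x.
    beta: confidence multiplier (beta = 2 gives roughly a 97.7%
    one-sided Gaussian bound). Returns True if the lower confidence
    bound mu - beta * sigma stays positive, i.e. safety holds with
    high probability under the GP statistics."""
    return mu - beta * sigma > 0.0

print(safe_with_high_probability(mu=1.0, sigma=0.3))  # bound is 0.4
print(safe_with_high_probability(mu=1.0, sigma=0.6))  # bound is -0.2
```

Because the bound tightens as the GP collects data, the certified safe region is non-conservative: it grows with confidence rather than being fixed by worst-case assumptions.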
Sparse Spectrum Gaussian Processes (SSGPs) are a powerful tool for scaling Gaussian processes (GPs) to large datasets.
We present an information theoretic approach to stochastic optimal control problems that can be used to derive general sampling based optimization schemes.
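The exponentiated-cost reweighting at the heart of such sampling-based schemes can be sketched as follows (an MPPI-style update; the temperature, toy cost, and sample sizes below are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def info_theoretic_update(controls, costs, lam=1.0):
    """Re-weight sampled control sequences by exponentiated cost.

    controls: (K, T) sampled control perturbation sequences
    costs:    (K,) trajectory costs
    lam:      temperature from the free-energy / relative-entropy bound
    """
    costs = np.asarray(costs, dtype=float)
    w = np.exp(-(costs - costs.min()) / lam)  # subtract min for stability
    w /= w.sum()                              # normalize the weights
    return w @ controls                       # importance-weighted update

rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 5))     # 100 sampled control sequences
costs = (samples ** 2).sum(axis=1)      # toy quadratic trajectory cost
u = info_theoretic_update(samples, costs)
```

Low-cost samples dominate the softmax weights, so the update concentrates the control distribution on trajectories with low free energy, which is exactly the optimization the information-theoretic derivation licenses.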