We take a divide-and-conquer approach to designing controllers for reachability problems in large-scale linear systems with polyhedral constraints on states, controls, and disturbances.
We develop a novel decentralized control method for a network of perturbed linear systems with dynamical couplings subject to Signal Temporal Logic (STL) specifications.
We present a computational framework for synthesis of distributed control strategies for a heterogeneous team of robots in a partially observable environment.
We first introduce the notion of a High Order Robust Adaptive Control Barrier Function (HO-RaCBF) as a means to compute control policies guaranteeing satisfaction of high relative degree safety constraints in the face of parametric model uncertainty.
We present a Deep Reinforcement Learning (DRL) algorithm for a task-guided robot with unknown continuous-time dynamics deployed in a large-scale complex environment.
In this paper, we introduce a time-incremental learning framework: given a dataset of labeled signal traces with a common time horizon, we propose a method to predict the label of a signal that is received incrementally over time, referred to as a prefix signal.
In addition, given system requirements in the form of SVM-STL specifications, we provide an approach for parameter synthesis to find parameters that maximize the satisfaction of such specifications.
Our algorithm leverages an ensemble of Concise Decision Trees (CDTs) to improve the classification performance, where each CDT is a decision tree that is empowered by a set of techniques to generate simpler formulae and improve interpretability.
1 code implementation • 28 Jul 2021 • Bassam Helou, Aditya Dusi, Anne Collin, Noushin Mehdipour, Zhiliang Chen, Cristhian Lizarazo, Calin Belta, Tichakorn Wongpiromsarn, Radboud Duintjer Tebbens, Oscar Beijbom
First, we found that these rules were enough for these models to achieve a high classification accuracy on the dataset.
Many autonomous systems, such as robots and self-driving cars, involve real-time decision making in complex environments, and require prediction of future outcomes from limited data.
This paper develops a model-based reinforcement learning (MBRL) framework for online learning of the value function of an infinite-horizon optimal control problem while obeying safety constraints expressed as control barrier functions (CBFs).
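To make the CBF constraint concrete: for a scalar control input, the standard CBF quadratic program that minimally modifies a reference control reduces to a closed-form clipping. This is a generic sketch, not the paper's method; the function and parameter names (`cbf_safety_filter`, `alpha`) are chosen for illustration.

```python
import numpy as np

def cbf_safety_filter(x, u_ref, f, g, h, grad_h, alpha=1.0):
    """Solve  min_u (u - u_ref)^2  s.t.  Lf h(x) + Lg h(x) u + alpha*h(x) >= 0
    for a scalar input u. The QP's closed form is a one-sided clip of u_ref."""
    Lfh = grad_h(x) @ f(x)          # Lie derivative of h along the drift f
    Lgh = grad_h(x) @ g(x)          # Lie derivative of h along the input direction g
    slack = Lfh + alpha * h(x)      # constraint reads: Lgh * u >= -slack
    if abs(Lgh) < 1e-9:             # constraint does not involve u at this state
        return u_ref
    bound = -slack / Lgh
    return max(u_ref, bound) if Lgh > 0 else min(u_ref, bound)

# Illustration: single integrator x' = u, safe set h(x) = 1 - x (keep x <= 1).
f = lambda x: np.array([0.0])
g = lambda x: np.array([1.0])
h = lambda x: 1.0 - x
grad_h = lambda x: np.array([-1.0])

u_safe = cbf_safety_filter(0.9, 1.0, f, g, h, grad_h)  # clipped to 0.1
u_free = cbf_safety_filter(0.0, 0.5, f, g, h, grad_h)  # unchanged, 0.5
```

Near the boundary the filter shrinks the reference control so the barrier condition holds; far from it, the reference passes through untouched.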
We propose a framework for solving control synthesis problems for multi-agent networked systems required to satisfy spatio-temporal specifications.
To capture the history dependency of STL specifications, we use a recurrent neural network (RNN) to implement the control policy.
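The role of the recurrence can be sketched in a few lines: the hidden state summarizes the trajectory prefix seen so far, which is what lets the policy condition on history-dependent STL tasks. This is a minimal Elman-style illustration with made-up dimensions and untrained random weights, not the authors' architecture.

```python
import numpy as np

class RNNPolicy:
    """Minimal recurrent control policy sketch: u_t depends on the whole
    state history x_0..x_t through the hidden state h_t."""

    def __init__(self, state_dim, hidden_dim, control_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.1, size=(hidden_dim, state_dim))
        self.Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.Wu = rng.normal(scale=0.1, size=(control_dim, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def step(self, x):
        # hidden state accumulates the trajectory prefix seen so far
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.Wu @ self.h  # control output at this time step

policy = RNNPolicy(state_dim=2, hidden_dim=4, control_dim=1)
x = np.array([0.5, -0.3])
u1 = policy.step(x)
u2 = policy.step(x)  # same state, different history -> different control
```

A memoryless (feedforward) policy would map the same state to the same control every time; here `u1 != u2` because the hidden state has changed, which is exactly the history dependence STL satisfaction requires.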
We define an HOCBF for a safety requirement on the unmodelled system based on the adaptive dynamics and error states, and reformulate the safety-critical control problem as the aforementioned QP.
The experimental results for a time-based trajectory show that the NMHE-NMPC framework with the proposed real-time iteration scheme gives better trajectory tracking performance than the ISL-LMPC framework, and that the required computation time is feasible for real-time applications.
In this paper, we study the problem of synthesizing optimal control policies for uncertain continuous-time nonlinear systems from syntactically co-safe linear temporal logic (scLTL) formulas.
We develop optimal control strategies for Autonomous Vehicles (AVs) that are required to meet complex specifications imposed by traffic laws and cultural expectations of reasonable driving behavior.
We propose a framework based on Recurrent Neural Networks (RNNs) to determine an optimal control strategy for a discrete-time system that is required to satisfy specifications given as Signal Temporal Logic (STL) formulae.
We compare the behaviors of IFFLs to negative autoregulatory loops, another sign-sensitive response-accelerating network motif, and find that increasing retroactivity in a negative autoregulated circuit can only slow the response.
The centralized QuickMatch algorithm is compared to other standard matching algorithms, while the Distributed QuickMatch algorithm is compared to the centralized algorithm in terms of preservation of match consistency.
We present a framework to synthesize control policies for nonlinear dynamical systems from complex temporal constraints specified in a rich temporal logic called Signal Temporal Logic (STL).
Tasks with complex temporal structures and long horizons pose a challenge for reinforcement learning agents due to the difficulty in specifying the tasks in terms of reward functions as well as large variances in the learning signals.
An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large number of interactions with the environment in order to master a skill.
Skills learned through (deep) reinforcement learning often generalize poorly across domains, and re-training is necessary when an agent is presented with a new task.
We propose Truncated Linear Temporal Logic (TLTL) as a specification language that is arguably well suited for robotics applications, together with quantitative semantics, i.e., a robustness degree.
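Quantitative semantics of this kind can be illustrated on a simple predicate. The sketch below uses the standard min/max robustness rules for "always" and "eventually" over a finite trace (not the TLTL-specific definitions): a positive value means the formula is satisfied with margin, a negative value means it is violated.

```python
import numpy as np

def rho_pred(signal, c):
    """Robustness of the predicate s_t > c at each time step: rho_t = s_t - c."""
    return np.asarray(signal, dtype=float) - c

def rho_always(rho):
    """Robustness of 'always phi' over the horizon: worst case (min) over time."""
    return float(np.min(rho))

def rho_eventually(rho):
    """Robustness of 'eventually phi' over the horizon: best case (max) over time."""
    return float(np.max(rho))

rho = rho_pred([1.5, 2.0, 0.5], c=1.0)   # [0.5, 1.0, -0.5]
r_alw = rho_always(rho)                  # -0.5: 'always s > 1' is violated
r_evt = rho_eventually(rho)              #  1.0: 'eventually s > 1' holds with margin
```

Because the robustness degree is a real number rather than a Boolean, it can serve directly as a (dense) learning signal, which is what makes such semantics attractive for reinforcement learning.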
This paper addresses the problem of learning optimal policies for satisfying signal temporal logic (STL) specifications by agents with unknown stochastic dynamics.
Reinforcement learning has been applied to many interesting problems, such as the famous TD-Gammon and inverted helicopter flight.
This paper introduces time window temporal logic (TWTL), a language with rich expressivity for describing various time-bounded specifications.
We present a new temporal logic called Distribution Temporal Logic (DTL) defined over predicates of belief states and hidden states of partially observable systems.