Search Results for author: Thomas A. Henzinger

Found 21 papers, 10 papers with code

Neural circuit policies enabling auditable autonomy

1 code implementation · 13 Oct 2020 · Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus, Radu Grosu

A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics.

Autonomous Vehicles · Decision Making

GoTube: Scalable Stochastic Verification of Continuous-Depth Models

1 code implementation · 18 Jul 2021 · Sophie Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott Smolka, Radu Grosu

Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states.
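
For intuition only, here is a minimal sampling-based sketch of the "tube" idea in NumPy. It is not GoTube itself (which formulates global optimization problems over the horizon rather than relying on naive sampling), and the dynamics f, ball radius, and horizon below are invented placeholders.

    import numpy as np

    def f(x, dt=0.01):
        # placeholder continuous-depth dynamics: one Euler step of dx/dt = tanh(W x)
        W = np.array([[0.0, 1.0], [-1.0, -0.2]])
        return x + dt * np.tanh(W @ x)

    def sample_ball(center, radius, n, rng):
        # draw n points uniformly from a ball around the initial state
        d = center.size
        v = rng.normal(size=(n, d))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        r = radius * rng.uniform(size=(n, 1)) ** (1.0 / d)
        return center + r * v

    def bounding_tube(center, radius, horizon, n=1000, seed=0):
        # propagate sampled executions and record, per step, a ball enclosing all of them
        rng = np.random.default_rng(seed)
        pts = sample_ball(center, radius, n, rng)
        c, tube = center.copy(), []
        for _ in range(horizon):
            pts = np.array([f(p) for p in pts])
            c = f(c)
            tube.append((c, np.max(np.linalg.norm(pts - c, axis=1))))
        return tube  # list of (center, radius) pairs, one per time step

    print(bounding_tube(np.array([1.0, 0.0]), 0.1, horizon=5)[:2])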

Into the Unknown: Active Monitoring of Neural Networks

1 code implementation · 14 Sep 2020 · Anna Lukina, Christian Schilling, Thomas A. Henzinger

To address this challenge, we introduce an algorithmic framework for active monitoring of a neural network.
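
The framework itself is in the paper; purely as a rough illustration, one can imagine a monitor that abstracts the inputs seen so far per class and flags inputs far from everything it knows, querying an authority for a label. The distance rule, threshold, and class-centroid abstraction below are my own simplification, not the paper's construction.

    import numpy as np

    class ActiveMonitor:
        """Toy novelty monitor over feature vectors; flagged inputs trigger a label query."""
        def __init__(self, threshold):
            self.threshold = threshold
            self.centroids = {}      # class label -> running mean feature vector
            self.counts = {}

        def observe(self, features):
            # flag the input if it lies far from every class centroid seen so far
            dists = [np.linalg.norm(features - c) for c in self.centroids.values()]
            return (not dists) or min(dists) > self.threshold

        def update(self, features, true_label):
            # fold the authority's answer back into the monitor's abstraction
            n = self.counts.get(true_label, 0)
            c = self.centroids.get(true_label, np.zeros_like(features))
            self.centroids[true_label] = (c * n + features) / (n + 1)
            self.counts[true_label] = n + 1

    m = ActiveMonitor(threshold=2.0)
    print(m.observe(np.array([0.5, 0.5])))   # True: nothing known yet, query the authority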

Scalable Verification of Quantized Neural Networks (Technical Report)

1 code implementation · 15 Dec 2020 · Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić

In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP.

Computational Efficiency · Quantization
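
The hardness result is about the bit-exact integer semantics of quantized inference rather than idealized real arithmetic. A minimal sketch of what "bit-exact" means for one int8 layer follows (the scales, zero point, and data are arbitrary, and real deployments replace the float rescale shown here with a fixed-point multiply-and-shift).

    import numpy as np

    def quantized_dense(x_q, w_q, bias_q, scale_x, scale_w, scale_y, zero_y):
        # int8 inputs and weights, int32 accumulation, then requantization to int8;
        # bit-vector specifications reason about exactly this integer behavior
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32) + bias_q
        y = np.round(acc * (scale_x * scale_w / scale_y)).astype(np.int64) + zero_y
        return np.clip(y, -128, 127).astype(np.int8)  # saturate to the int8 range

    x_q = np.array([12, -7, 33], dtype=np.int8)
    w_q = np.array([[5, -2], [1, 9], [-4, 3]], dtype=np.int8)
    print(quantized_dense(x_q, w_q, np.array([100, -50]), 0.02, 0.01, 0.05, 0))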

Formal Methods with a Touch of Magic

no code implementations · 25 May 2020 · Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina

Machine learning and formal methods have complementary benefits and drawbacks.

Adversarial Training is Not Ready for Robot Learning

no code implementations · 15 Mar 2021 · Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, Thomas A. Henzinger

Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance.
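
For context, the generic technique referred to here is PGD-style adversarial training: an inner maximization finds a worst-case perturbation inside a norm ball, and the outer step trains on the perturbed inputs. The toy NumPy logistic-regression version below only illustrates that loop (not the paper's robot-learning setup; all hyperparameters are arbitrary).

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w, b, eps, alpha, lr = np.zeros(2), 0.0, 0.3, 0.1, 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(100):
        # inner maximization: a few projected gradient ascent steps on the loss,
        # keeping the perturbation inside an L-infinity ball of radius eps
        X_adv = X.copy()
        for _ in range(5):
            p = sigmoid(X_adv @ w + b)
            grad_x = np.outer(p - y, w)                       # d(loss)/d(input)
            X_adv = X + np.clip(X_adv + alpha * np.sign(grad_x) - X, -eps, eps)
        # outer minimization: an ordinary gradient step on the adversarial examples
        p = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)

    print("robust training accuracy:", np.mean((sigmoid(X_adv @ w + b) > 0.5) == y))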

Infinite Time Horizon Safety of Bayesian Neural Networks

1 code implementation · NeurIPS 2021 · Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger

Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction.

Reinforcement Learning (RL) +1
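
As a reminder of the object being verified, a toy Bayesian layer with a mean-field Gaussian distribution over its weights is sketched below; the parameters are made up, and this is not the paper's verification procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    # mean-field Gaussian "posterior" over the weights of a 2 -> 1 layer (values invented)
    w_mean, w_std = np.array([0.8, -0.3]), np.array([0.10, 0.05])

    def bnn_predict(x, n_samples=1000):
        # every forward pass draws fresh weights, so the prediction is a distribution
        w = rng.normal(w_mean, w_std, size=(n_samples, 2))
        out = np.tanh(w @ x)
        return out.mean(), out.std()   # predictive mean and uncertainty

    mean, std = bnn_predict(np.array([1.0, 2.0]))
    print(f"prediction {mean:.3f} +/- {std:.3f}")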

Stability Verification in Stochastic Control Systems via Neural Network Supermartingales

no code implementations · 17 Dec 2021 · Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger

We consider the problem of formally verifying almost-sure (a.s.) asymptotic stability in discrete-time nonlinear stochastic control systems.
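
The certificates behind such proofs are supermartingale-like functions V, here represented by neural networks; a schematic expected-decrease condition of the kind used in almost-sure stability arguments is (my notation, not necessarily the paper's exact definition):

    V(x) \ge 0 \quad \text{for all } x, \qquad
    \mathbb{E}\bigl[\, V(x_{t+1}) \mid x_t = x \,\bigr] \le V(x) - \varepsilon
    \quad \text{for all } x \notin \mathcal{X}_{\mathrm{stab}}, \ \varepsilon > 0.

Intuitively, V decreases in expectation outside the stabilizing region, so trajectories cannot avoid that region forever except on a probability-zero set of executions.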

Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning

no code implementations · 15 Apr 2022 · Mathias Lechner, Alexander Amini, Daniela Rus, Thomas A. Henzinger

However, the improved robustness does not come for free but rather is accompanied by a decrease in overall model accuracy and performance.

Adversarial Robustness · Autonomous Driving +2

Learning Stabilizing Policies in Stochastic Control Systems

no code implementations · 24 May 2022 · Đorđe Žikelić, Mathias Lechner, Krishnendu Chatterjee, Thomas A. Henzinger

In this work, we address the problem of learning provably stable neural network policies for stochastic control systems.

Entangled Residual Mappings

no code implementations · 2 Jun 2022 · Mathias Lechner, Ramin Hasani, Zahra Babaiee, Radu Grosu, Daniela Rus, Thomas A. Henzinger, Sepp Hochreiter

Residual mappings have been shown to perform representation learning in the first layers and iterative feature refinement in higher layers.

Inductive Bias · Representation Learning

Synthesis of Parametric Hybrid Automata from Time Series

1 code implementation · 13 Jul 2022 · Miriam García Soto, Thomas A. Henzinger, Christian Schilling

We propose an algorithmic approach for synthesizing linear hybrid automata from time-series data.

Time Series · Time Series Analysis
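
As a rough illustration of the synthesis task (and emphatically not the authors' algorithm), one can think of splitting the series wherever a single linear model stops fitting the observed derivative and treating each segment's fitted dynamics as one candidate mode; a naive greedy version with an invented tolerance is sketched below.

    import numpy as np

    def segment_linear_modes(t, x, tol=1e-2):
        """Greedily split a 1-D series into segments that each fit dx/dt ~ a*x + b."""
        dx = np.gradient(x, t)
        modes, start, end = [], 0, 3
        while end <= len(t):
            A = np.column_stack([x[start:end], np.ones(end - start)])
            coef, *_ = np.linalg.lstsq(A, dx[start:end], rcond=None)
            if np.max(np.abs(A @ coef - dx[start:end])) > tol and end - start > 3:
                # the fit just broke: close the current mode at the previous point
                modes.append((t[start], t[end - 2], good_coef))
                start, end = end - 2, end + 1
            else:
                good_coef = coef
                end += 1
        modes.append((t[start], t[-1], good_coef))
        return modes  # each entry: (t_enter, t_exit, (a, b)) for one candidate mode

    t = np.linspace(0, 2, 200)
    x = np.where(t < 1, np.exp(-t), np.exp(-1) + 0.5 * (t - 1))  # two regimes
    print(len(segment_linear_modes(t, x)), "candidate modes")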

Are All Vision Models Created Equal? A Study of the Open-Loop to Closed-Loop Causality Gap

no code implementations · 9 Oct 2022 · Mathias Lechner, Ramin Hasani, Alexander Amini, Tsun-Hsuan Wang, Thomas A. Henzinger, Daniela Rus

Our results imply that the causality gap can be closed in the first situation by our proposed training guideline with any modern network architecture, whereas achieving out-of-distribution generalization (the second situation) requires further investigation, for instance into data diversity rather than model architecture.

Autonomous Driving · Image Classification +1

Learning Control Policies for Stochastic Systems with Reach-avoid Guarantees

no code implementations · 11 Oct 2022 · Đorđe Žikelić, Mathias Lechner, Thomas A. Henzinger, Krishnendu Chatterjee

We study the problem of learning controllers for discrete-time non-linear stochastic dynamical systems with formal reach-avoid guarantees.
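
The guarantee typically rests on a learned certificate. A standard reach-avoid supermartingale (RASM) formulation in the spirit of this line of work requires a nonnegative function V with (my paraphrase; the paper's exact conditions and constants may differ):

    V(x_0) \le 1 \ \text{for initial states}, \qquad
    V(x) \ge \tfrac{1}{\lambda} \ \text{on the unsafe set}, \qquad
    \mathbb{E}\bigl[V(x_{t+1}) \mid x_t = x\bigr] \le V(x) - \varepsilon \ \text{outside the target set},

from which the controlled system reaches the target while avoiding the unsafe set with probability at least 1 - \lambda.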

Learning Provably Stabilizing Neural Controllers for Discrete-Time Stochastic Systems

1 code implementation · 11 Oct 2022 · Matin Ansaripour, Krishnendu Chatterjee, Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić

We show that this procedure can also be adapted to formally verify that, under a given Lipschitz continuous control policy, the stochastic system stabilizes within some stabilizing region with probability 1.

Continuous Control

Runtime Monitoring of Dynamic Fairness Properties

no code implementations · 8 May 2023 · Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik Mallik

Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild, and will output, with each event, a verdict on how fair the system is at the current point in time.

Decision Making · Fairness
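
The monitors described in the paper are more sophisticated; the toy streaming monitor below only caricatures the interface of emitting a verdict after every event (the two-group event format and the demographic-parity gap as the "fairness" quantity are my own choices for illustration).

    class FairnessMonitor:
        """Toy streaming monitor: after each event, report the current demographic-parity gap."""
        def __init__(self):
            self.pos = {"A": 0, "B": 0}   # positive decisions per group (group names invented)
            self.tot = {"A": 0, "B": 0}

        def observe(self, group, decision):
            self.tot[group] += 1
            self.pos[group] += int(decision)
            if min(self.tot.values()) == 0:
                return None               # no verdict until both groups have been observed
            rate = {g: self.pos[g] / self.tot[g] for g in self.tot}
            return abs(rate["A"] - rate["B"])   # verdict: current estimate of the fairness gap

    monitor = FairnessMonitor()
    for group, decision in [("A", 1), ("B", 0), ("A", 1), ("B", 1), ("A", 0)]:
        print(monitor.observe(group, decision))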

Monitoring Algorithmic Fairness

no code implementations · 25 May 2023 · Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik Mallik

While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a given prior belief about the system's model.

Fairness
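
The contrast can be made concrete on a single event probability: a frequentist Hoeffding-style confidence interval versus a Bayesian Beta-posterior credible interval. The comparison below is only a toy; the monitors in the paper operate over event sequences generated by a running system rather than a fixed i.i.d. sample.

    import numpy as np
    from scipy import stats

    events = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # invented observations of one event
    n, k, delta = len(events), int(events.sum()), 0.05

    # frequentist: point estimate plus a Hoeffding confidence interval,
    # objectively correct with respect to the unknown ground-truth probability
    p_hat = k / n
    radius = np.sqrt(np.log(2 / delta) / (2 * n))
    print("frequentist:", p_hat, "+/-", radius)

    # Bayesian: a Beta(1, 1) prior updated by the same data,
    # correct subject to that prior belief about the system
    posterior = stats.beta(1 + k, 1 + n - k)
    print("Bayesian 95% credible interval:", posterior.interval(0.95))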

Monitoring Algorithmic Fairness under Partial Observations

no code implementations · 1 Aug 2023 · Thomas A. Henzinger, Konstantin Kueffner, Kaushik Mallik

Moreover, they can monitor only fairness properties that are specified as arithmetic expressions over the probabilities of different events.

Fairness

Compositional Policy Learning in Stochastic Control Systems with Formal Guarantees

1 code implementation · NeurIPS 2023 · Đorđe Žikelić, Mathias Lechner, Abhinav Verma, Krishnendu Chatterjee, Thomas A. Henzinger

We also derive a lower bound on the probability of reach-avoidance implied by a RASM that is tighter than in previous work; this bound is required to find a compositional policy with an acceptable probabilistic threshold for complex tasks with multiple edge policies.
