Search Results for author: Thomas A. Henzinger

Found 11 papers, 6 papers with code

Learning Stabilizing Policies in Stochastic Control Systems

no code implementations • 24 May 2022 • Đorđe Žikelić, Mathias Lechner, Krishnendu Chatterjee, Thomas A. Henzinger

In this work, we address the problem of learning provably stable neural network policies for stochastic control systems.

Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning

no code implementations • 15 Apr 2022 • Mathias Lechner, Alexander Amini, Daniela Rus, Thomas A. Henzinger

This work revisits the robustness-accuracy trade-off in robot learning by systematically analyzing whether recent advances in robust training methods and theory, in conjunction with adversarial robot learning, can make adversarial training suitable for real-world robot applications.

Adversarial Robustness • Autonomous Driving • +1

Stability Verification in Stochastic Control Systems via Neural Network Supermartingales

no code implementations • 17 Dec 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger

We consider the problem of formally verifying almost-sure (a.s.) asymptotic stability in discrete-time nonlinear stochastic control systems.
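The supermartingale idea behind this line of work can be illustrated numerically: a candidate function V whose expected value strictly decreases along system trajectories outside a target region certifies a.s. stability. The sketch below is a hypothetical Monte Carlo check on a toy linear system — the dynamics, V, and thresholds are all invented for illustration, and this is not the paper's neural supermartingale or its formal verification procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, noise_scale=0.05):
    """Toy discrete-time stochastic system: contracting linear dynamics plus noise."""
    return 0.5 * x + noise_scale * rng.standard_normal(np.shape(x))

def V(x):
    """Candidate supermartingale (Lyapunov-like) function."""
    return x ** 2

# Monte Carlo check of the decrease condition
# E[V(x_{t+1}) | x_t] < V(x_t) - eps on states outside a small target region.
eps = 1e-3
violations = 0
for x in np.linspace(-1.0, 1.0, 41):
    if abs(x) <= 0.1:
        continue  # inside the target set around the equilibrium; no decrease required
    successors = step(np.full(100_000, x))   # sampled successors of the same state
    if V(successors).mean() > V(x) - eps:
        violations += 1

print("decrease-condition violations:", violations)
```

A formal proof would bound the expectation symbolically rather than sampling it; the Monte Carlo version only suggests that this V is a plausible candidate.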

Infinite Time Horizon Safety of Bayesian Neural Networks

1 code implementation • NeurIPS 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger

Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction.
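The weight-distribution idea can be sketched in a few lines: sample weight sets from per-weight Gaussians and aggregate the resulting outputs into a predictive mean and an uncertainty estimate. The "posterior" below is random and hypothetical — a trained BNN would supply the means and standard deviations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-weight Gaussian posterior for a 1-hidden-layer network.
w1_mu, w1_sigma = rng.standard_normal((1, 8)), 0.1
w2_mu, w2_sigma = rng.standard_normal((8, 1)), 0.1

def forward(x, w1, w2):
    return np.tanh(x @ w1) @ w2

def predict(x, n_samples=1000):
    """Monte Carlo BNN prediction: sample weights, aggregate the outputs."""
    outs = []
    for _ in range(n_samples):
        w1 = w1_mu + w1_sigma * rng.standard_normal(w1_mu.shape)
        w2 = w2_mu + w2_sigma * rng.standard_normal(w2_mu.shape)
        outs.append(forward(x, w1, w2))
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)  # predictive mean and uncertainty

mean, std = predict(np.array([[0.5]]))
print(f"prediction {mean.item():+.3f} ± {std.item():.3f}")
```

The spread of the sampled outputs is exactly what makes safety statements over an infinite time horizon hard: every forward pass uses different weights.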

Reinforcement Learning • Safe Exploration

GoTube: Scalable Stochastic Verification of Continuous-Depth Models

1 code implementation • 18 Jul 2021 • Sophie Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott Smolka, Radu Grosu

Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states.
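The "tube" picture can be approximated crudely by Monte Carlo: propagate many states sampled from the ball of initial states through the dynamics and record, at each time step, the largest observed distance from the center trajectory. This is only an under-approximating sampling sketch with hypothetical dynamics — GoTube itself obtains statistical guarantees by solving the global optimization problems the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """Hypothetical 2-D dynamics: a damped rotation."""
    a, b = x[..., 0], x[..., 1]
    return np.stack([-0.5 * a - b, a - 0.5 * b], axis=-1)

def integrate(x, dt=0.01, steps=300):
    """Explicit Euler integration; returns the whole trajectory."""
    traj = [x]
    for _ in range(steps):
        x = x + dt * f(x)
        traj.append(x)
    return np.array(traj)            # shape (steps+1, n, 2)

# Ball of initial states around a center point.
center = np.array([1.0, 0.0])
dirs = rng.standard_normal((500, 2))
pts = center + 0.1 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

ref = integrate(center[None, :])     # center trajectory
cloud = integrate(pts)               # sampled process executions
radii = np.linalg.norm(cloud - ref, axis=-1).max(axis=1)  # tube radius per step
print(f"initial radius {radii[0]:.3f}, final radius {radii[-1]:.3f}")
```

For these contracting dynamics the sampled tube shrinks over time; a verification tool must additionally certify that no unsampled execution escapes the enclosure.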

Adversarial Training is Not Ready for Robot Learning

no code implementations • 15 Mar 2021 • Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, Thomas A. Henzinger

Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance.
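The norm-bounded perturbations in question can be illustrated with a single FGSM step (a standard attack, not specific to this paper): shift the input in the sign of the loss gradient, bounded in L-infinity norm by eps. The linear model and its weights below are toy stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y, w, b):
    """Gradient of the logistic loss with respect to the input x."""
    p = sigmoid(x @ w + b)
    return (p - y) * w

# Toy linear classifier (hypothetical weights, not a trained robot-learning model).
w, b = np.array([2.0, -1.0]), 0.1
x, y = np.array([0.3, 0.4]), 1.0

# FGSM: one signed gradient step, bounded in L-infinity norm by eps.
eps = 0.1
x_adv = x + eps * np.sign(loss_grad_x(x, y, w, b))

p_clean = sigmoid(x @ w + b)
p_adv = sigmoid(x_adv @ w + b)
print(f"clean p(y=1) = {p_clean:.3f}, adversarial p(y=1) = {p_adv:.3f}")
```

Adversarial training would generate such perturbed inputs during training and fit the model on them, which is where the nominal-performance cost comes from.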

Scalable Verification of Quantized Neural Networks (Technical Report)

1 code implementation • 15 Dec 2020 • Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić

In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP.
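For context on what a bit-exact implementation looks like, the sketch below shows generic uniform affine quantization of weights to signed 8-bit integers: verification of the quantized network must reason about these rounded integer values, not the idealized reals. This is a generic illustration of quantization, not the paper's bit-vector encoding.

```python
import numpy as np

# Weights of a hypothetical real-valued network layer.
w = np.array([0.42, -1.3, 0.07, 0.9])

# Uniform affine (symmetric) quantization to int8.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)  # stored integers
w_hat = q.astype(np.float64) * scale                         # dequantized values

print("quantized:", q)
print("max round-trip error:", np.abs(w - w_hat).max())
```

The gap between `w` and `w_hat` is why results about idealized real-valued networks do not transfer directly to their deployed integer implementations.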

Quantization

Neural circuit policies enabling auditable autonomy

1 code implementation • 13 Oct 2020 • Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus, Radu Grosu

A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics.

Autonomous Vehicles Decision Making

Into the Unknown: Active Monitoring of Neural Networks

1 code implementation • 14 Sep 2020 • Anna Lukina, Christian Schilling, Thomas A. Henzinger

To address this challenge, we introduce an algorithmic framework for active monitoring of a neural network.
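One simple instance of monitoring a network at runtime — a hypothetical distance-to-centroid novelty check, far cruder than the paper's active framework — flags inputs whose feature representations are far from every class seen during training:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical feature vectors for two known classes (stand-ins for a
# network's hidden-layer activations on training data).
feats = {0: rng.normal(0.0, 0.3, (200, 4)),
         1: rng.normal(3.0, 0.3, (200, 4))}

centroids = {c: f.mean(axis=0) for c, f in feats.items()}
# Threshold: the largest in-class distance to the class centroid.
thresh = max(np.linalg.norm(f - centroids[c], axis=1).max()
             for c, f in feats.items())

def monitor(x):
    """Flag an input whose features are far from every known class."""
    d = min(np.linalg.norm(x - m) for m in centroids.values())
    return "novel" if d > thresh else "known"

print(monitor(rng.normal(0.0, 0.3, 4)))  # features near class 0
print(monitor(np.full(4, 10.0)))         # features far from both classes
```

An *active* monitor would go further, e.g. by querying an authority about flagged inputs and adapting its decision boundary over time.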

Formal Methods with a Touch of Magic

no code implementations • 25 May 2020 • Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina

Machine learning and formal methods have complementary benefits and drawbacks.
