1 code implementation • 20 Nov 2019 • Thomas A. Henzinger, Anna Lukina, Christian Schilling
Neural networks have demonstrated unmatched performance in a range of classification tasks.
no code implementations • 25 May 2020 • Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina
Machine learning and formal methods have complementary benefits and drawbacks.
1 code implementation • 14 Sep 2020 • Anna Lukina, Christian Schilling, Thomas A. Henzinger
To address this challenge, we introduce an algorithmic framework for active monitoring of a neural network.
1 code implementation • 13 Oct 2020 • Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus, Radu Grosu
A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics.
1 code implementation • 15 Dec 2020 • Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić
In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP.
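The gap between the two semantics can be made concrete with a minimal sketch (the weights, fixed-point scale, and rounding mode below are illustrative assumptions, not the paper's encoding):

```python
import numpy as np

def quantize(v, scale=0.1, bits=8):
    # Round to the nearest fixed-point grid value and saturate to the
    # signed integer range, mimicking int8 fixed-point arithmetic.
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(v / scale), lo, hi) * scale

# A one-neuron "network": y = relu(w . x).
w = np.array([0.14, -0.47])
x = np.array([1.04, 0.26])

real_out = max(0.0, float(w @ x))                       # idealized reals
quant_out = max(0.0, float(quantize(w) @ quantize(x)))  # bit-exact view
# The real-valued neuron fires while the quantized one does not, so a
# proof about the idealized network says nothing about the implementation.
```

This is why bit-exact verification must reason over bit-vector arithmetic rather than reuse real-valued relaxations.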
no code implementations • 15 Mar 2021 • Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, Thomas A. Henzinger
Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance.
1 code implementation • 18 Jul 2021 • Sophie Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott Smolka, Radu Grosu
Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states.
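A rough sketch of the tube idea, with plain sampling standing in for the paper's global optimizer and statistical guarantees (the dynamics and all constants below are assumptions for illustration):

```python
import numpy as np

def step(x, dt=0.01):
    # Simple nonlinear dynamics (Van der Pol oscillator), Euler-integrated.
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * ((1 - x1 ** 2) * x2 - x1)])

def tube_radius(center, r0, horizon, n_samples=500, seed=0):
    """Estimate the tube radius at each step by maximizing, over sampled
    initial states on the ball B(center, r0), the distance to the center
    trajectory -- a sampling stand-in for the global optimization step."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_samples, 2))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = center + r0 * dirs          # initial states on the ball surface
    c = center.copy()
    radii = []
    for _ in range(horizon):
        c = step(c)
        pts = np.array([step(p) for p in pts])
        radii.append(float(np.max(np.linalg.norm(pts - c, axis=1))))
    return radii

radii = tube_radius(np.array([1.0, 0.0]), r0=0.05, horizon=100)
```

The resulting per-step radii trace out an (under-approximate, here) enclosure of all executions starting in the initial ball.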
1 code implementation • NeurIPS 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger
Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction.
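A minimal sketch of that idea, assuming a single linear layer with independent Gaussian weight posteriors (the means and standard deviations are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each weight has a Gaussian posterior (mean, std) instead of a point value.
w_mean = np.array([0.8, -0.4])
w_std = np.array([0.10, 0.05])

def predict(x, n_samples=1000):
    """Monte Carlo prediction: sample weight vectors, average the outputs,
    and report the spread as a measure of predictive uncertainty."""
    ws = rng.normal(w_mean, w_std, size=(n_samples, 2))
    outs = ws @ x
    return outs.mean(), outs.std()

mean, std = predict(np.array([1.0, 1.0]))
```

The nonzero spread `std` is exactly the predictive uncertainty that a deterministic network with point weights cannot express.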
no code implementations • 17 Dec 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger
We consider the problem of formally verifying almost-sure (a.s.) asymptotic stability in discrete-time nonlinear stochastic control systems.
no code implementations • 15 Apr 2022 • Mathias Lechner, Alexander Amini, Daniela Rus, Thomas A. Henzinger
However, the improved robustness does not come for free but rather is accompanied by a decrease in overall model accuracy and performance.
no code implementations • 24 May 2022 • Đorđe Žikelić, Mathias Lechner, Krishnendu Chatterjee, Thomas A. Henzinger
In this work, we address the problem of learning provably stable neural network policies for stochastic control systems.
no code implementations • 2 Jun 2022 • Mathias Lechner, Ramin Hasani, Zahra Babaiee, Radu Grosu, Daniela Rus, Thomas A. Henzinger, Sepp Hochreiter
Residual mappings have been shown to perform representation learning in the first layers and iterative feature refinement in higher layers.
1 code implementation • 13 Jul 2022 • Miriam García Soto, Thomas A. Henzinger, Christian Schilling
We propose an algorithmic approach for synthesizing linear hybrid automata from time-series data.
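The first step of such a synthesis can be sketched as greedy slope-based segmentation (a simplified illustration, not the paper's algorithm; the signal and tolerance are assumptions):

```python
import numpy as np

def segment_modes(times, values, tol=1e-6):
    """Greedily split a time series into maximal pieces of (approximately)
    constant slope -- each piece becomes a candidate mode of a linear
    hybrid automaton with flow dx/dt = slope."""
    segments = []
    start = 0
    for i in range(2, len(times)):
        s_prev = (values[i-1] - values[start]) / (times[i-1] - times[start])
        s_new = (values[i] - values[i-1]) / (times[i] - times[i-1])
        if abs(s_new - s_prev) > tol:
            segments.append((times[start], times[i-1], s_prev))
            start = i - 1
    s_last = (values[-1] - values[start]) / (times[-1] - times[start])
    segments.append((times[start], times[-1], s_last))
    return segments

# A signal that rises with slope 2 until t = 5, then falls with slope -1.
t = np.arange(0.0, 10.0, 1.0)
y = np.where(t <= 5, 2 * t, 10 - (t - 5))
modes = segment_modes(t, y)  # two modes: (0, 5, 2.0) and (5, 9, -1.0)
```

The recovered segments then serve as the automaton's modes, with switches at the segment boundaries.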
no code implementations • 9 Oct 2022 • Mathias Lechner, Ramin Hasani, Alexander Amini, Tsun-Hsuan Wang, Thomas A. Henzinger, Daniela Rus
Our results imply that, with our proposed training guideline, the causality gap in situation one can be closed by any modern network architecture, whereas achieving out-of-distribution generalization (situation two) requires further investigation, for instance into data diversity rather than model architecture.
no code implementations • 11 Oct 2022 • Đorđe Žikelić, Mathias Lechner, Thomas A. Henzinger, Krishnendu Chatterjee
We study the problem of learning controllers for discrete-time non-linear stochastic dynamical systems with formal reach-avoid guarantees.
1 code implementation • 11 Oct 2022 • Matin Ansaripour, Krishnendu Chatterjee, Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić
We show that this procedure can also be adapted to formally verifying that, under a given Lipschitz continuous control policy, the stochastic system stabilizes within some stabilizing region with probability 1.
1 code implementation • 29 Nov 2022 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger, Daniela Rus
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
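One standard ingredient for certification of this kind is interval bound propagation, sketched here in its generic real-valued form (not the paper's QNN-specific method; the weights, input, and radius are assumptions):

```python
import numpy as np

def interval_linear(W, b, lo, hi):
    """Propagate the box [lo, hi] through y = W x + b: split W into its
    positive and negative parts to get sound per-output bounds."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Toy 2-class logit layer; certify that class 0 wins for every input
# in the L-infinity ball of radius eps around x.
W = np.array([[1.0, 0.5], [-0.5, 0.2]])
b = np.array([0.1, -0.1])
x = np.array([1.0, 1.0])
eps = 0.1

lo, hi = interval_linear(W, b, x - eps, x + eps)
# Certified robust if class 0's worst-case logit beats class 1's best case.
certified = lo[0] > hi[1]
```

For quantized networks the same bounds must additionally account for rounding and saturation at every layer, which is what makes the QNN setting harder.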
no code implementations • 8 May 2023 • Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik Mallik
Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild, and will output, with each event, a verdict on how fair the system is at the current point in time.
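The monitor's shape can be sketched as a streaming estimator of a demographic-parity gap with a Hoeffding-style confidence interval (an illustrative frequentist monitor under invented event streams, not the paper's exact construction):

```python
import math

class FairnessMonitor:
    """Streaming estimate of the difference in acceptance rates between
    two groups, with a union-bounded Hoeffding confidence interval."""

    def __init__(self):
        self.n = [0, 0]       # events seen per group
        self.acc = [0, 0]     # acceptances per group

    def observe(self, group, accepted):
        self.n[group] += 1
        self.acc[group] += int(accepted)

    def verdict(self, delta=0.05):
        """Return a (1 - delta)-confidence interval for the rate gap."""
        if min(self.n) == 0:
            return None
        rates = [self.acc[g] / self.n[g] for g in (0, 1)]
        width = sum(math.sqrt(math.log(4 / delta) / (2 * self.n[g]))
                    for g in (0, 1))
        gap = rates[0] - rates[1]
        return gap - width, gap + width

m = FairnessMonitor()
for i in range(1000):
    m.observe(0, i % 2 == 0)   # group 0 accepted 50% of the time
    m.observe(1, i % 10 < 3)   # group 1 accepted 30% of the time
low, high = m.verdict()        # interval excludes 0: bias detected
```

After each event, `verdict()` yields a quantitative answer to "how fair is the system right now", which tightens as more events arrive.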
no code implementations • 25 May 2023 • Thomas A. Henzinger, Mahyar Karimi, Konstantin Kueffner, Kaushik Mallik
While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a given prior belief about the system's model.
no code implementations • 1 Aug 2023 • Thomas A. Henzinger, Konstantin Kueffner, Kaushik Mallik
Moreover, they can monitor only fairness properties that are specified as arithmetic expressions over the probabilities of different events.
1 code implementation • NeurIPS 2023 • Đorđe Žikelić, Mathias Lechner, Abhinav Verma, Krishnendu Chatterjee, Thomas A. Henzinger
We also derive a lower bound on the probability of reach-avoidance implied by a RASM that is tighter than in previous work; this tighter bound is required to find a compositional policy with an acceptable probabilistic threshold for complex tasks with multiple edge policies.
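The flavor of such bounds can be illustrated with Ville's inequality for nonnegative supermartingales, checked by Monte Carlo (a generic sketch; the process and constants are assumptions, not the paper's RASM construction):

```python
import numpy as np

def hit_prob(v0, trials=20000, T=200, seed=0):
    """Empirical probability that a nonnegative martingale started at
    v0 < 1 ever reaches level 1 within T steps."""
    rng = np.random.default_rng(seed)
    # Multiplicative steps with mean 1, so V_t is a martingale.
    steps = rng.uniform(0.5, 1.5, size=(trials, T))
    paths = v0 * np.cumprod(steps, axis=1)
    return float(np.mean(paths.max(axis=1) >= 1.0))

v0 = 0.2
p = hit_prob(v0)
# Ville's inequality gives P(sup_t V_t >= 1) <= V_0 = 0.2, so 1 - v0
# lower-bounds the probability that V stays below 1 -- the shape of the
# reach-avoidance bound a RASM certifies.
```

Tightening this kind of bound directly raises the probability threshold one can certify for a given candidate certificate.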