no code implementations • ICML 2020 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
We propose a neural information processing system obtained by re-purposing the function of a biological neural circuit model to govern simulated and real-world control tasks.
1 code implementation • 10 May 2024 • Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Atsushi Yamashita, Michael Poli
We approach the design of state-space models for deep learning applications through their dual representation, the transfer function, and uncover a highly efficient sequence-parallel inference algorithm that is state-free: unlike other proposed algorithms, state-free inference incurs no significant memory or computational cost as the state size grows.
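As a rough sketch of the idea (not the paper's implementation), a single-input single-output SSM can be parameterized directly by its rational transfer function and run with FFTs alone, with no recurrent state; b and a below are hypothetical coefficient vectors in powers of z^-1:

    import numpy as np

    def transfer_function_ssm(b, a, u):
        # Evaluate H = b(z^-1) / a(z^-1) at the roots of unity, recover the
        # (aliased) impulse response by inverse FFT, then apply the model as
        # one FFT convolution over the whole sequence -- state-free inference.
        L = len(u)
        w = np.exp(-2j * np.pi * np.arange(L) / L)  # z^-1 on the unit circle
        H = np.polyval(b[::-1], w) / np.polyval(a[::-1], w)
        k = np.fft.ifft(H).real
        y = np.fft.ifft(np.fft.fft(k, 2 * L) * np.fft.fft(u, 2 * L)).real
        return y[:L]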
1 code implementation • NeurIPS 2023 • Đorđe Žikelić, Mathias Lechner, Abhinav Verma, Krishnendu Chatterjee, Thomas A. Henzinger
We also derive a tighter lower bound than previous work on the probability of reach-avoidance implied by a reach-avoid supermartingale (RASM); this bound is required to find a compositional policy meeting an acceptable probabilistic threshold for complex tasks with multiple edge policies.
no code implementations • 21 Nov 2023 • Mónika Farsang, Mathias Lechner, David Lung, Ramin Hasani, Daniela Rus, Radu Grosu
In this work, we aim to determine the impact of using chemical synapses compared to electrical synapses in both sparse and all-to-all connected networks.
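For orientation, standard neural-circuit models (a generic sketch with illustrative parameter names, not the paper's exact equations) distinguish the two synapse types as follows:

    import numpy as np

    def chemical_synapse_current(v_pre, v_post, w, mu, gamma, e_rev):
        # Graded chemical synapse: sigmoidal in the presynaptic potential,
        # pulling the postsynaptic neuron toward the reversal potential e_rev.
        s = 1.0 / (1.0 + np.exp(-gamma * (v_pre - mu)))
        return w * s * (e_rev - v_post)

    def electrical_synapse_current(v_pre, v_post, w):
        # Gap junction: a linear, bidirectional conductance between neurons.
        return w * (v_pre - v_post)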
no code implementations • 5 Oct 2023 • Neehal Tumma, Mathias Lechner, Noel Loo, Ramin Hasani, Daniela Rus
In this work, we explore the application of recurrent neural networks to tasks of this nature and understand how a parameterization of their recurrent connectivity influences robustness in closed-loop settings.
no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations and guarantees of dataset distillation: specifically, what excess risk does distillation incur compared to training on the original dataset, and how large do distilled datasets need to be?
no code implementations • 21 Mar 2023 • Noam Buckman, Shiva Sreeram, Mathias Lechner, Yutong Ban, Ramin Hasani, Sertac Karaman, Daniela Rus
FailureNet observes the poses of vehicles as they approach an intersection and detects whether a failure is present in the autonomy stack, warning cross-traffic of potentially dangerous drivers.
2 code implementations • 13 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG), that substantially improves the state-of-the-art.
Ranked #1 on Dataset Distillation - 1IPC on TinyImageNet
1 code implementation • 2 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
We show, both theoretically and empirically, that reconstructed images tend to be outliers in the dataset, and that these reconstruction attacks can be used for dataset distillation: that is, we can retrain on reconstructed images and obtain high predictive accuracy.
no code implementations • 21 Dec 2022 • Lianhao Yin, Makram Chahine, Tsun-Hsuan Wang, Tim Seyde, Chao Liu, Mathias Lechner, Ramin Hasani, Daniela Rus
We propose an air-guardian system that facilitates cooperation between a pilot with eye tracking and a parallel end-to-end neural control system.
1 code implementation • 29 Nov 2022 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger, Daniela Rus
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
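One standard building block for such certification is interval bound propagation; a minimal sketch through a hypothetical uniformly quantized affine layer (not necessarily the paper's exact method):

    import numpy as np

    def ibp_quantized_linear(lo, hi, W_q, b_q, scale):
        # Propagate elementwise input bounds [lo, hi] through
        # y = scale * (W_q @ x + b_q) with integer-valued W_q, b_q:
        # positive weights take the matching bound, negative the opposite.
        W_pos, W_neg = np.maximum(W_q, 0), np.minimum(W_q, 0)
        out_lo = scale * (W_pos @ lo + W_neg @ hi + b_q)
        out_hi = scale * (W_pos @ hi + W_neg @ lo + b_q)
        return out_lo, out_hi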
no code implementations • 11 Oct 2022 • Đorđe Žikelić, Mathias Lechner, Thomas A. Henzinger, Krishnendu Chatterjee
We study the problem of learning controllers for discrete-time non-linear stochastic dynamical systems with formal reach-avoid guarantees.
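The guarantees rest on a reach-avoid supermartingale (RASM): a certificate function V whose expected value decreases along the closed-loop dynamics. Below is a sampling-based sanity check of that condition at a single state (hypothetical helper names; the paper certifies the condition formally over whole regions, not by sampling):

    import numpy as np

    def rasm_decrease_at(V, sample_next_state, x, eps, n=10_000, rng=None):
        # Empirically test E[V(x')] <= V(x) - eps at state x by sampling
        # the stochastic closed-loop successor distribution.
        rng = rng or np.random.default_rng(0)
        vals = np.array([V(sample_next_state(x, rng)) for _ in range(n)])
        return vals.mean() <= V(x) - eps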
1 code implementation • 11 Oct 2022 • Matin Ansaripour, Krishnendu Chatterjee, Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić
We show that this procedure can also be adapted to formally verifying that, under a given Lipschitz continuous control policy, the stochastic system stabilizes within some stabilizing region with probability 1.
no code implementations • 10 Oct 2022 • Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Mathias Lechner, Yutong Ban, Chuang Gan, Daniela Rus
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation.
2 code implementations • 10 Oct 2022 • Mathias Lechner, Ramin Hasani, Philipp Neubauer, Sophie Neubauer, Daniela Rus
Hyperparameter tuning is a fundamental aspect of machine learning research.
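For context, the simplest baseline any tuning tool competes with is plain random search; a minimal generic sketch (illustrative names, not the paper's tool):

    import numpy as np

    def random_search(objective, space, n_trials=50, seed=0):
        # space maps each hyperparameter name to a (low, high) range,
        # sampled uniformly; keeps the best-scoring configuration.
        rng = np.random.default_rng(seed)
        best_cfg, best_val = None, -np.inf
        for _ in range(n_trials):
            cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
            val = objective(cfg)
            if val > best_val:
                best_cfg, best_val = cfg, val
        return best_cfg, best_val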
no code implementations • 9 Oct 2022 • Mathias Lechner, Ramin Hasani, Alexander Amini, Tsun-Hsuan Wang, Thomas A. Henzinger, Daniela Rus
Our results imply that, with our proposed training guideline, the causality gap can be closed in the first setting by any modern network architecture, whereas achieving out-of-distribution generalization (the second setting) requires further investigation, for instance into data diversity rather than model architecture.
1 code implementation • 26 Sep 2022 • Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus
A proper parametrization of the state transition matrices of linear state-space models (SSMs), followed by standard nonlinearities, enables them to learn representations from sequential data efficiently, establishing state-of-the-art results on a wide range of long-range sequence modeling benchmarks.
Ranked #1 on SpO2 estimation on BIDMC
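A minimal diagonal linear SSM of the generic kind referred to here (a sketch: lam, B, C are illustrative per-mode parameters, and the paper's liquid variant additionally makes the state transition input-dependent):

    import numpy as np

    def diagonal_ssm(u, lam, B, C, dt=1.0):
        # Zero-order-hold discretization of x' = diag(lam) x + B u for a
        # single-input, single-output layer, with a nonlinear readout.
        A_bar = np.exp(lam * dt)
        B_bar = (A_bar - 1.0) / lam * B
        x = np.zeros_like(lam)
        ys = []
        for u_t in u:
            x = A_bar * x + B_bar * u_t
            ys.append(np.tanh((C * x).sum().real))
        return np.array(ys)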
no code implementations • 2 Jun 2022 • Mathias Lechner, Ramin Hasani, Zahra Babaiee, Radu Grosu, Daniela Rus, Thomas A. Henzinger, Sepp Hochreiter
Residual mappings have been shown to perform representation learning in the first layers and iterative feature refinement in higher layers.
no code implementations • 24 May 2022 • Đorđe Žikelić, Mathias Lechner, Krishnendu Chatterjee, Thomas A. Henzinger
In this work, we address the problem of learning provably stable neural network policies for stochastic control systems.
no code implementations • 15 Apr 2022 • Mathias Lechner, Alexander Amini, Daniela Rus, Thomas A. Henzinger
However, the improved robustness does not come for free: it is accompanied by a decrease in overall model accuracy and performance.
no code implementations • 17 Dec 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger
We consider the problem of formally verifying almost-sure (a.s.) asymptotic stability in discrete-time nonlinear stochastic control systems.
1 code implementation • NeurIPS 2021 • Mathias Lechner, Đorđe Žikelić, Krishnendu Chatterjee, Thomas A. Henzinger
Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction.
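Prediction with such a posterior is typically done by Monte-Carlo weight sampling; a mean-field Gaussian sketch (generic, with an assumed forward function):

    import numpy as np

    def bnn_predict(x, w_mu, w_sigma, forward, n_samples=100, rng=None):
        # Draw weights w ~ N(w_mu, w_sigma^2) and average the predictions;
        # the spread across samples estimates the model's uncertainty.
        rng = rng or np.random.default_rng(0)
        preds = np.stack([forward(x, w_mu + w_sigma * rng.standard_normal(w_mu.shape))
                          for _ in range(n_samples)])
        return preds.mean(axis=0), preds.std(axis=0)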
1 code implementation • 14 Oct 2021 • Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner
While convolutional neural networks (CNNs) have found wide adoption as state-of-the-art models for image-related tasks, their predictions are often highly sensitive to small input perturbations to which human vision is robust.
no code implementations • 29 Sep 2021 • Mathias Lechner, Ramin Hasani
These models, however, face difficulties when the input data possess long-term dependencies.
1 code implementation • 18 Jul 2021 • Sophie Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott Smolka, Radu Grosu
Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states.
1 code implementation • 25 Jun 2021 • Ramin Hasani, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron Ray, Max Tschaikowski, Gerald Teschl, Daniela Rus
To this end, we compute a tightly bounded approximation of the solution of an integral appearing in LTCs' dynamics that has no known closed-form solution.
Ranked #38 on Sentiment Analysis on IMDb
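The resulting closed-form update has roughly the following shape (a sketch: f, g, and h stand for small neural heads, and the published parameterization carries further details):

    import numpy as np

    def cfc_style_update(x, I, t, f, g, h):
        # A sigmoidal time gate blends two heads, approximating the LTC
        # ODE solution in closed form -- no numerical ODE solver needed.
        gate = 1.0 / (1.0 + np.exp(f(x, I) * t))  # sigma(-f(x, I) * t)
        return gate * g(x, I) + (1.0 - gate) * h(x, I)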
1 code implementation • NeurIPS 2021 • Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments.
1 code implementation • 13 Jun 2021 • Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, Radu Grosu
Robustness to variations in lighting conditions is a key objective for any deep vision system.
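The classic lighting-robust front-end this line of work draws on is the on-center/off-surround receptive field, i.e. a difference-of-Gaussians filter (an illustrative kernel; the paper's layer details may differ):

    import numpy as np

    def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
        # Narrow excitatory center minus wide inhibitory surround:
        # responds to local contrast rather than absolute brightness.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
        return g(sigma_c) - g(sigma_s)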
no code implementations • 15 Mar 2021 • Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, Thomas A. Henzinger
Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance.
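The norm-bounded perturbations are typically generated with projected gradient descent; a standard sketch of the inner maximization used in adversarial training (loss_grad is an assumed helper returning the gradient of the loss with respect to the input):

    import numpy as np

    def pgd_attack(x, y, loss_grad, eps, steps=10, rng=None):
        # L-infinity PGD: take signed gradient-ascent steps on the loss,
        # projecting back into the eps-ball around the clean input x.
        rng = rng or np.random.default_rng(0)
        alpha = 2.5 * eps / steps
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        for _ in range(steps):
            x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, y))
            x_adv = np.clip(x_adv, x - eps, x + eps)
        return x_adv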
1 code implementation • 8 Mar 2021 • Axel Brunnbauer, Luigi Berducci, Andreas Brandstätter, Mathias Lechner, Ramin Hasani, Daniela Rus, Radu Grosu
World models learn behaviors in a latent imagination space to enhance the sample-efficiency of deep reinforcement learning (RL) algorithms.
no code implementations • 16 Dec 2020 • Sophie Gruenbacher, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott A. Smolka, Radu Grosu
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems.
1 code implementation • 15 Dec 2020 • Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić
In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP.
1 code implementation • 14 Dec 2020 • Sophie Gruenbacher, Jacek Cyranka, Mathias Lechner, Md. Ariful Islam, Scott A. Smolka, Radu Grosu
Secondly, it computes the next reachset as the intersection of two balls: one based on the Cartesian metric and the other on the new metric.
1 code implementation • 13 Oct 2020 • Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus, Radu Grosu
A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics.
2 code implementations • NeurIPS 2020 • Mathias Lechner, Ramin Hasani
These models, however, face difficulties when the input data possess long-term dependencies.
Ranked #10 on Sequential Image Classification on Sequential MNIST
4 code implementations • 8 Jun 2020 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
We introduce a new class of time-continuous recurrent neural network models.
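The defining property is a state-dependent (liquid) time constant; a sketch of the fused semi-implicit solver step the paper describes (f stands for a bounded neural network head, A for a learned bias vector):

    import numpy as np

    def ltc_fused_step(x, I, tau, A, f, dt=0.1, unfolds=6):
        # Semi-implicit solver for dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A:
        # the decay term is handled implicitly, which keeps the update stable.
        h = dt / unfolds
        for _ in range(unfolds):
            fx = f(x, I)
            x = (x + h * fx * A) / (1.0 + h * (1.0 / tau + fx))
        return x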
1 code implementation • ICLR 2020 • Mathias Lechner
The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP) by substituting the computations that are unrealistic to implement in physical brains.
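The core substitution, in a sketch of one hidden layer with illustrative shapes: the backward pass routes the output error through a fixed random matrix rather than the transpose of the forward weights.

    import numpy as np

    rng = np.random.default_rng(0)
    W2 = rng.normal(size=(32, 10))  # forward weights of the output layer
    B2 = rng.normal(size=(32, 10))  # fixed random feedback matrix, never trained

    def hidden_delta_fa(err, h_grad):
        # Backpropagation would reuse the forward weights, (W2 @ err) * h_grad
        # ("weight transport"); feedback alignment uses B2 instead.
        return (B2 @ err) * h_grad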
no code implementations • 1 Nov 2018 • Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
In this paper, we introduce the notion of liquid time-constant (LTC) recurrent neural networks (RNNs), a subclass of continuous-time RNNs with varying neuronal time-constants realized by their nonlinear synaptic transmission model.
no code implementations • 11 Sep 2018 • Ramin M. Hasani, Alexander Amini, Mathias Lechner, Felix Naser, Radu Grosu, Daniela Rus
In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs) at the cellular level.
1 code implementation • 11 Sep 2018 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
Inspired by the structure of the nervous system of the soil worm C. elegans, we introduce Neuronal Circuit Policies (NCPs), defined as models of biological neural circuits reparameterized for the control of alternative tasks.
1 code implementation • 22 Mar 2018 • Mathias Lechner, Ramin M. Hasani, Radu Grosu
We propose an effective way to create interpretable control agents by re-purposing the function of a biological neural circuit model to govern simulated and real-world reinforcement learning (RL) test-beds.
no code implementations • 9 Nov 2017 • Mathias Lechner, Radu Grosu, Ramin M. Hasani
We model the tap-withdrawal (TW) neural circuit of the nematode C. elegans, a circuit responsible for the worm's reflexive response to external mechanical touch stimuli, and learn its synaptic and neural parameters as a policy for controlling the inverted pendulum problem.