no code implementations • ICML 2020 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
We propose a neural information processing system which is obtained by re-purposing the function of a biological neural circuit model to govern simulated and real-world control tasks.
no code implementations • 18 Sep 2024 • Fouad Makiyeh, Mark Bastourous, Anass Bairouk, Wei Xiao, Mirjana Maras, Tsun-Hsuan Wang, Marc Blanchon, Ramin Hasani, Patrick Chareyre, Daniela Rus
This demonstrates the potential of optical flow data, combined with advanced neural network architectures (a CNN-based structure for fusing data and a recurrence-based network for inferring a command from the latent space), to enhance the performance of autonomous vehicle steering estimation.
no code implementations • 16 Sep 2024 • Huy-Dung Nguyen, Anass Bairouk, Mirjana Maras, Wei Xiao, Tsun-Hsuan Wang, Patrick Chareyre, Ramin Hasani, Marc Blanchon, Daniela Rus
While our performance in steering angle estimation is comparable to existing methods, the integration of human-like perception through multi-task learning holds significant potential for advancing autonomous driving systems.
no code implementations • 21 Jun 2024 • Alex Quach, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, crafty programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
1 code implementation • 10 May 2024 • Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Atsushi Yamashita, Michael Poli
We approach designing a state-space model for deep learning applications through its dual representation, the transfer function, and uncover a highly efficient sequence parallel inference algorithm that is state-free: unlike other proposed algorithms, state-free inference does not incur any significant memory or computational cost with an increase in state size.
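As a rough illustration of the state-free idea (a minimal sketch with assumed names and shapes, not the authors' code): the SSM is represented by the coefficients of its rational transfer function, the impulse response is recovered with FFTs of those coefficient vectors, and the output is an FFT convolution, so no state vector of the model's state size is ever materialized.

```python
import numpy as np

def rtf_kernel(b, a, L):
    """Length-L impulse response of H(z) = b(z)/a(z), with a[0] == 1."""
    B = np.fft.rfft(b, n=L)           # numerator sampled at roots of unity
    A = np.fft.rfft(a, n=L)           # denominator sampled at roots of unity
    return np.fft.irfft(B / A, n=L)   # inverse FFT -> convolution kernel

def rtf_apply(u, b, a):
    """Causal convolution of the input u with the SSM's impulse response."""
    L = len(u)
    k = rtf_kernel(b, a, L)
    U = np.fft.rfft(u, n=2 * L)       # zero-pad to avoid circular wrap
    K = np.fft.rfft(k, n=2 * L)
    return np.fft.irfft(U * K, n=2 * L)[:L]

# toy usage: an order-3 transfer function on a random sequence
rng = np.random.default_rng(0)
b = rng.normal(size=4)                    # numerator coefficients
a = np.array([1.0, -0.5, 0.1, -0.01])     # a stable-ish denominator
y = rtf_apply(rng.normal(size=1024), b, a)
```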
no code implementations • 2 Apr 2024 • Anass Bairouk, Mirjana Maras, Simon Herlin, Alexander Amini, Marc Blanchon, Ramin Hasani, Patrick Chareyre, Daniela Rus
Autonomous driving presents a complex challenge, which is usually addressed with artificial intelligence models that are end-to-end or modular in nature.
no code implementations • 21 Nov 2023 • Mónika Farsang, Mathias Lechner, David Lung, Ramin Hasani, Daniela Rus, Radu Grosu
In this work we aim to determine the impact of using chemical synapses compared to electrical synapses, in both sparse and all-to-all connected networks.
no code implementations • 5 Oct 2023 • Neehal Tumma, Mathias Lechner, Noel Loo, Ramin Hasani, Daniela Rus
In this work, we explore the application of recurrent neural networks to tasks of this nature and understand how a parameterization of their recurrent connectivity influences robustness in closed-loop settings.
no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations/guarantees of dataset distillation, specifically, what excess risk is achieved by distillation compared to the original dataset, and how large distilled datasets need to be.
no code implementations • 5 Apr 2023 • Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus
Modern end-to-end learning systems can learn to explicitly infer control from perception.
no code implementations • 21 Mar 2023 • Noam Buckman, Shiva Sreeram, Mathias Lechner, Yutong Ban, Ramin Hasani, Sertac Karaman, Daniela Rus
FailureNet observes the poses of vehicles as they approach an intersection and detects whether a failure is present in the autonomy stack, warning cross-traffic of potentially dangerous drivers.
2 code implementations • 13 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG), that substantially improves the state-of-the-art.
Ranked #1 on Dataset Distillation - 1IPC on TinyImageNet
1 code implementation • 2 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset, and that these reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
no code implementations • 21 Dec 2022 • Lianhao Yin, Makram Chahine, Tsun-Hsuan Wang, Tim Seyde, Chao Liu, Mathias Lechner, Ramin Hasani, Daniela Rus
We propose an air-guardian system that facilitates cooperation between a pilot with eye tracking and a parallel end-to-end neural control system.
1 code implementation • 21 Oct 2022 • Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
In this limit, the kernel is frozen, and the underlying feature map is fixed.
2 code implementations • 21 Oct 2022 • Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
Dataset distillation compresses large datasets into smaller synthetic coresets that retain performance, with the aim of reducing the storage and computational burden of processing the entire dataset.
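For orientation, kernel-based distillation methods in this line of work optimize a synthetic set so that a model fit on it performs well on the real data; the sketch below shows that objective with plain kernel ridge regression and a linear kernel, which are simplifying assumptions rather than the paper's random-feature construction.

```python
import numpy as np

def krr_distill_loss(Xs, Ys, Xt, Yt, reg=1e-3):
    """Fit kernel ridge regression on the synthetic set (Xs, Ys) and
    measure its squared error on the real set (Xt, Yt)."""
    Kss = Xs @ Xs.T                   # linear kernel on synthetic points
    Kts = Xt @ Xs.T                   # cross kernel, real vs. synthetic
    alpha = np.linalg.solve(Kss + reg * np.eye(len(Xs)), Ys)
    return np.mean((Kts @ alpha - Yt) ** 2)

# toy usage: 10 synthetic points distilled against 1000 real ones;
# an autodiff framework would differentiate this loss w.r.t. (Xs, Ys)
rng = np.random.default_rng(0)
Xs, Ys = rng.normal(size=(10, 32)), rng.normal(size=(10, 1))
Xt, Yt = rng.normal(size=(1000, 32)), rng.normal(size=(1000, 1))
loss = krr_distill_loss(Xs, Ys, Xt, Yt)
```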
no code implementations • 20 Oct 2022 • Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu
On the CIFAR-10 dataset, without requiring a pre-trained baseline network, we obtain accuracy gains of 1.02% and 1.19% and parameter reductions of 52.3% and 54% on ResNet56 and ResNet110, respectively.
no code implementations • 13 Oct 2022 • Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus
The advancement of robots, particularly those functioning in complex human-centric environments, relies on control solutions that are driven by machine learning.
2 code implementations • 10 Oct 2022 • Mathias Lechner, Ramin Hasani, Philipp Neubauer, Sophie Neubauer, Daniela Rus
Hyperparameter tuning is a fundamental aspect of machine learning research.
no code implementations • 10 Oct 2022 • Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Mathias Lechner, Yutong Ban, Chuang Gan, Daniela Rus
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation.
no code implementations • 9 Oct 2022 • Mathias Lechner, Ramin Hasani, Alexander Amini, Tsun-Hsuan Wang, Thomas A. Henzinger, Daniela Rus
Our results imply that the causality gap in situation one can be closed with our proposed training guideline and any modern network architecture, whereas achieving out-of-distribution generalization (situation two) requires further investigation, for instance into data diversity rather than model architecture.
1 code implementation • 26 Sep 2022 • Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Daniela Rus
A proper parametrization of state transition matrices of linear state-space models (SSMs) followed by standard nonlinearities enables them to efficiently learn representations from sequential data, establishing the state-of-the-art on a large series of long-range sequence modeling benchmarks.
Ranked #1 on SpO2 estimation on BIDMC
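To make the ingredients concrete, here is a minimal sketch of a linear state-space layer followed by a standard nonlinearity, assuming a diagonal state matrix and zero-order-hold discretization; this is a generic illustration, not the paper's exact Liquid-S4 parametrization.

```python
import numpy as np

def ssm_layer(u, log_neg_a, b, c, dt=1.0):
    """Scan a diagonal linear SSM x' = a*x + b*u, y = c.x over u."""
    a = -np.exp(log_neg_a)            # negative real parts -> stable
    a_bar = np.exp(a * dt)            # zero-order-hold discretization of A
    b_bar = (a_bar - 1.0) / a * b     # matching discretization of B
    x = np.zeros_like(a)
    y = np.empty(len(u))
    for t, u_t in enumerate(u):       # sequential scan; in practice this
        x = a_bar * x + b_bar * u_t   # is a parallel scan or convolution
        y[t] = c @ x
    return np.tanh(y)                 # a standard pointwise nonlinearity

# toy usage: 8-dimensional state, length-64 input sequence
rng = np.random.default_rng(0)
n = 8
out = ssm_layer(rng.normal(size=64), log_neg_a=rng.normal(size=n),
                b=rng.normal(size=n), c=rng.normal(size=n))
```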
no code implementations • 2 Jun 2022 • Mathias Lechner, Ramin Hasani, Zahra Babaiee, Radu Grosu, Daniela Rus, Thomas A. Henzinger, Sepp Hochreiter
Residual mappings have been shown to perform representation learning in the first layers and iterative feature refinement in higher layers.
no code implementations • 15 Apr 2022 • Zahra Babaiee, Lucas Liebenwein, Ramin Hasani, Daniela Rus, Radu Grosu
Moreover, by training the pruning scores of all layers simultaneously, our method can account for layer interdependencies, which is essential for finding a performant sparse sub-network.
no code implementations • 4 Mar 2022 • Wei Xiao, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus
They are interpretable at scale, achieve strong test performance with limited training data, and provide safety guarantees in a series of autonomous driving scenarios such as lane keeping and obstacle avoidance.
no code implementations • 22 Nov 2021 • Wei Xiao, Ramin Hasani, Xiao Li, Daniela Rus
This paper introduces differentiable higher-order control barrier functions (CBF) that are end-to-end trainable together with learning systems.
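For reference, the standard higher-order CBF construction for a safety constraint b(x) >= 0 of relative degree m is the recursion below; parameterizing the class-K functions (e.g., linearly, with trainable slopes, which is an assumed form for illustration) is what makes the safety layer end-to-end trainable.

```latex
% Higher-order CBF recursion for a safety constraint b(x) >= 0 with
% relative degree m; the \alpha_i are class-K functions. Parameterizing
% them, e.g. \alpha_i(s) = p_i s with trainable p_i > 0 (an assumed
% form, for illustration), makes the recursion differentiable.
\psi_0(x) = b(x), \qquad
\psi_i(x) = \dot{\psi}_{i-1}(x) + \alpha_i\big(\psi_{i-1}(x)\big),
\quad i = 1, \dots, m.
```

Choosing the control input so that \psi_m(x, u) >= 0 holds renders the intersection of the sets C_i = {x : \psi_{i-1}(x) >= 0} forward invariant, which is the safety guarantee.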
1 code implementation • 14 Oct 2021 • Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner
While convolutional neural networks (CNNs) have found wide adoption as state-of-the-art models for image-related tasks, their predictions are often highly sensitive to small input perturbations to which human vision is robust.
no code implementations • 29 Sep 2021 • Mathias Lechner, Ramin Hasani
These models, however, face difficulties when the input data possess long-term dependencies.
1 code implementation • 18 Jul 2021 • Sophie Gruenbacher, Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas A. Henzinger, Scott Smolka, Radu Grosu
Our algorithm solves a set of global optimization (Go) problems over a given time horizon to construct a tight enclosure (Tube) of the set of all process executions starting from a ball of initial states.
1 code implementation • 25 Jun 2021 • Ramin Hasani, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron Ray, Max Tschaikowski, Gerald Teschl, Daniela Rus
To this end, we compute a tightly bounded approximation of the solution of an integral appearing in LTCs' dynamics that previously had no known closed-form solution.
Ranked #38 on Sentiment Analysis on IMDb
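The resulting closed-form continuous-time (CfC) cell replaces ODE-solver unrolling with a gated blend of state candidates; a minimal sketch under assumed head names and shapes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(x, inp, t, heads):
    """One CfC step; `heads` holds three small networks f, g, h."""
    z = np.concatenate([x, inp])
    f, g, h = heads["f"](z), heads["g"](z), heads["h"](z)
    gate = sigmoid(-f * t)            # learned, time-dependent gate
    return gate * g + (1.0 - gate) * h

# toy usage: linear maps standing in for the learned heads
rng = np.random.default_rng(0)
d, k = 4, 3
W = {name: rng.normal(size=(d, d + k)) for name in "fgh"}
heads = {name: (lambda z, name=name: W[name] @ z) for name in "fgh"}
x_next = cfc_cell(np.zeros(d), rng.normal(size=k), t=0.1, heads=heads)
```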
1 code implementation • NeurIPS 2021 • Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.
1 code implementation • NeurIPS 2021 • Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments.
1 code implementation • 13 Jun 2021 • Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, Radu Grosu
Robustness to variations in lighting conditions is a key objective for any deep vision system.
no code implementations • 15 Mar 2021 • Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, Thomas A. Henzinger
Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance.
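As a reminder of the mechanics, adversarial training alternates between crafting a norm-bounded worst-case perturbation and descending on it; a minimal sketch on a toy linear model (all names and the l-infinity budget are illustrative assumptions):

```python
import numpy as np

def pgd_linf(x, y, w, eps=0.1, steps=5, lr=0.02):
    """Maximize the squared loss of the linear model w within |delta| <= eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = 2.0 * (w @ (x + delta) - y) * w   # d(loss)/d(input)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return x + delta

# training step: descend on the adversarial example instead of x
rng = np.random.default_rng(0)
w, x, y = rng.normal(size=8), rng.normal(size=8), 1.0
x_adv = pgd_linf(x, y, w)
w -= 0.01 * 2.0 * (w @ x_adv - y) * x_adv        # SGD on the worst case
```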
1 code implementation • 8 Mar 2021 • Axel Brunnbauer, Luigi Berducci, Andreas Brandstätter, Mathias Lechner, Ramin Hasani, Daniela Rus, Radu Grosu
World models learn behaviors in a latent imagination space to enhance the sample-efficiency of deep reinforcement learning (RL) algorithms.
no code implementations • 16 Dec 2020 • Sophie Gruenbacher, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott A. Smolka, Radu Grosu
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems.
1 code implementation • 13 Oct 2020 • Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus, Radu Grosu
A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics.
2 code implementations • NeurIPS 2020 • Mathias Lechner, Ramin Hasani
These models, however, face difficulties when the input data possess long-term dependencies.
Ranked #10 on Sequential Image Classification on Sequential MNIST
4 code implementations • 8 Jun 2020 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
We introduce a new class of time-continuous recurrent neural network models.
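These liquid time-constant (LTC) networks evolve a hidden state whose time constant depends on the input; a minimal sketch of one update, using the fused explicit-implicit Euler step described in the paper (shapes and the sigmoid-layer f are assumptions):

```python
import numpy as np

def ltc_step(x, inp, W, A, tau, dt=0.1):
    """One LTC update for dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A."""
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, inp]))))
    # fused solver: semi-implicit in x, explicit in f
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# toy usage: 4 neurons driven by a 2-dimensional input stream
rng = np.random.default_rng(0)
x = np.zeros(4)
W = rng.normal(size=(4, 6))
A, tau = rng.normal(size=4), np.ones(4)
for _ in range(10):
    x = ltc_step(x, rng.normal(size=2), W, A, tau)
```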
1 code implementation • 11 Sep 2018 • Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
Inspired by the structure of the nervous system of the soil worm C. elegans, we introduce Neuronal Circuit Policies (NCPs), defined as the model of biological neural circuits reparameterized for the control of an alternative task.