no code implementations • 5 Oct 2023 • Neehal Tumma, Mathias Lechner, Noel Loo, Ramin Hasani, Daniela Rus
In this work, we explore the application of recurrent neural networks to closed-loop control tasks and study how the parameterization of their recurrent connectivity influences robustness in this setting.
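The abstract does not spell out the parameterization; as a minimal sketch, one common way to constrain recurrent connectivity is a low-rank factorization of the recurrent weight matrix. Everything below (class name, rank, initialization) is an illustrative assumption, not the paper's method.

```python
import torch
import torch.nn as nn

class LowRankRNNCell(nn.Module):
    """Vanilla RNN cell whose recurrent matrix is constrained to rank r.

    An illustrative sketch only; the paper's actual connectivity
    parameterization may differ.
    """
    def __init__(self, input_size, hidden_size, rank):
        super().__init__()
        self.W_in = nn.Linear(input_size, hidden_size)
        # Recurrent weights factored as W_hh = U V^T (rank-`rank`),
        # cutting recurrent parameters from H^2 to 2*H*rank.
        self.U = nn.Parameter(torch.randn(hidden_size, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(hidden_size, rank) / rank ** 0.5)

    def forward(self, x, h):
        # h_{t+1} = tanh(W_in x_t + U V^T h_t)
        return torch.tanh(self.W_in(x) + h @ self.V @ self.U.T)
```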
no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
Despite significant empirical progress in recent years, there is little theoretical understanding of the limitations and guarantees of dataset distillation: specifically, what excess risk does distillation incur relative to training on the original dataset, and how large must distilled datasets be?
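In standard learning-theoretic terms (our gloss, not necessarily the paper's notation), the excess risk of a distilled set $\tilde{S}$ relative to the original set $S$ can be written as

$$\mathrm{ExcessRisk}(\tilde{S}) \;=\; R\big(\hat{f}_{\tilde{S}}\big) - R\big(\hat{f}_{S}\big), \qquad R(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x), y)\big],$$

where $\hat{f}_{\tilde{S}}$ and $\hat{f}_{S}$ are models trained on the distilled and original datasets, respectively, and $\mathcal{D}$ is the data distribution.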
2 code implementations • 13 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
We propose RCIG, a new dataset distillation algorithm based on reparameterization and convexification of implicit gradients, which substantially improves on the state of the art.
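Dataset distillation of this kind is a bilevel optimization problem: an inner model fit on the synthetic set, and an outer loss on real data differentiated through that fit. The sketch below substitutes a ridge-regression inner solve (convex, with a closed-form, differentiable solution) for the paper's convexified inner problem; it is illustrative only, not RCIG itself.

```python
import torch

def distill_step(X_syn, Y_syn, X_real, Y_real, lam=1e-3, lr=0.1):
    """One outer step of a bilevel distillation sketch.

    Inner problem: ridge regression on the synthetic set -- convex, so
    its solution is an implicit, differentiable function of (X_syn, Y_syn).
    Outer problem: squared error of that solution on real data.
    Illustrative only; RCIG's inner problem and gradient computation differ.
    """
    d = X_syn.shape[1]
    # Closed-form inner solution: w* = (X'X + lam I)^{-1} X'Y.
    A = X_syn.T @ X_syn + lam * torch.eye(d)
    w_star = torch.linalg.solve(A, X_syn.T @ Y_syn)
    # Outer loss on real data; autograd differentiates through the solve.
    loss = ((X_real @ w_star - Y_real) ** 2).mean()
    loss.backward()
    with torch.no_grad():  # gradient step on the synthetic data itself
        X_syn -= lr * X_syn.grad
        Y_syn -= lr * Y_syn.grad
        X_syn.grad.zero_()
        Y_syn.grad.zero_()
    return loss.item()
```

A typical usage would initialize `X_syn = torch.randn(m, d, requires_grad=True)` and `Y_syn = torch.randn(m, k, requires_grad=True)` and call `distill_step` in a loop.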
1 code implementation • 2 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset, and that these reconstruction attacks can be used for dataset distillation; that is, retraining on reconstructed images yields high predictive accuracy.
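A minimal sketch of the retraining experiment the abstract describes: fit a fresh model on the reconstructed images and measure test accuracy. The architecture and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def retrain_and_evaluate(recon_x, recon_y, test_x, test_y, epochs=200):
    """Retrain a fresh model on reconstructed images, then report test
    accuracy. Architecture and hyperparameters are illustrative; the
    paper's models and training setup may differ."""
    model = nn.Sequential(nn.Flatten(), nn.Linear(recon_x[0].numel(), 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(recon_x), recon_y).backward()
        opt.step()
    with torch.no_grad():
        acc = (model(test_x).argmax(dim=1) == test_y).float().mean()
    return acc.item()
```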
2 code implementations • 21 Oct 2022 • Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
Dataset distillation compresses large datasets into smaller synthetic coresets that retain performance, with the aim of reducing the storage and computational burden of processing the entire dataset.
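To make the compression concrete (illustrative numbers, not figures from the abstract): distilling a training set of $n = 50{,}000$ images down to 10 synthetic images per class over 10 classes leaves $m = 100$ images, a $n/m = 500\times$ reduction in stored examples and a correspondingly cheaper downstream training set.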
1 code implementation • 21 Oct 2022 • Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
In the infinite-width limit, the kernel is frozen and the underlying feature map is fixed.
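For context, this is standard neural tangent kernel (NTK) theory rather than anything specific to this paper: for a network $f_\theta$, the empirical NTK is

$$\Theta(x, x') \;=\; \nabla_\theta f_\theta(x)^\top \nabla_\theta f_\theta(x'),$$

and as the width goes to infinity, $\Theta$ remains constant throughout training, so learning reduces to kernel regression with a fixed feature map.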
no code implementations • ICLR 2021 • Noel Loo, Siddharth Swaroop, Richard E. Turner
One strand of research uses probabilistic regularization for continual learning; two of the main approaches in this vein are Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL).
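For reference, the two objectives in their standard textbook forms (notation is ours): Online EWC penalizes movement away from previous task solutions, weighted by a running Fisher estimate $\tilde{F}$,

$$\mathcal{L}_t(\theta) \;=\; \mathcal{L}^{\text{task}}_t(\theta) + \frac{\lambda}{2} \sum_i \tilde{F}_i \big(\theta_i - \theta^{*}_{t-1,i}\big)^2,$$

while VCL recursively updates an approximate posterior over the weights by minimizing

$$\mathcal{L}^{\text{VCL}}_t(q) \;=\; \mathrm{KL}\big(q(\theta)\,\big\|\,q_{t-1}(\theta)\big) - \mathbb{E}_{q(\theta)}\big[\log p(\mathcal{D}_t \mid \theta)\big].$$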
no code implementations • ICML Workshop LifelongML 2020 • Noel Loo, Siddharth Swaroop, Richard E Turner
The standard architecture for continual learning is a multi-headed neural network, which has shared body parameters and task-specific heads.
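A minimal sketch of that architecture (layer sizes are illustrative assumptions):

```python
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared body with one output head per task -- the standard
    continual-learning architecture the abstract describes."""
    def __init__(self, n_tasks, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: only head `t` is used on task t.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, n_classes) for _ in range(n_tasks)
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.body(x))
```

During training on task `t`, only `heads[t]` receives gradients alongside the shared body; at test time the task identity selects which head to read out.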