Search Results for author: Noel Loo

Found 8 papers, 4 papers with code

Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control

no code implementations • 5 Oct 2023 • Neehal Tumma, Mathias Lechner, Noel Loo, Ramin Hasani, Daniela Rus

In this work, we explore the application of recurrent neural networks to closed-loop control tasks and examine how the parameterization of their recurrent connectivity influences robustness in this setting.

On the Size and Approximation Error of Distilled Sets

no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus

Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations and guarantees of dataset distillation: specifically, what excess risk does distillation incur relative to the original dataset, and how large do distilled datasets need to be?

regression

Dataset Distillation with Convexified Implicit Gradients

2 code implementations • 13 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus

We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG) that substantially improves the state of the art.

Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation

1 code implementation • 2 Feb 2023 • Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus

We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset, and that these reconstruction attacks can be used for dataset distillation: that is, we can retrain on reconstructed images and obtain high predictive accuracy.

Reconstruction Attack

Efficient Dataset Distillation Using Random Feature Approximation

2 code implementations • 21 Oct 2022 • Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus

Dataset distillation compresses large datasets into smaller synthetic coresets that retain performance, with the aim of reducing the storage and computational burden of processing the entire dataset.

Dataset Condensation • regression
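To illustrate the dataset distillation setup described above, here is a minimal sketch of the generic bilevel idea (learning synthetic images such that a model briefly trained on them fits the real data). This is not the random-feature algorithm from the paper; the model, data shapes, and loop lengths are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Placeholder "real" data: 512 flattened 32x32x3 images with 10 classes.
real_x = torch.randn(512, 3 * 32 * 32)
real_y = torch.randint(0, 10, (512,))

# Learnable synthetic coreset: far fewer examples than the real dataset.
syn_x = torch.randn(50, 3 * 32 * 32, requires_grad=True)
syn_y = torch.randint(0, 10, (50,))
outer_opt = torch.optim.Adam([syn_x], lr=1e-2)

for step in range(200):
    # Fresh linear classifier, kept as an explicit tensor so the inner
    # updates stay differentiable with respect to the synthetic images.
    W = torch.zeros(3 * 32 * 32, 10, requires_grad=True)
    inner_lr = 0.1
    # Inner loop: a few differentiable gradient steps on the synthetic set.
    for _ in range(5):
        inner_loss = F.cross_entropy(syn_x @ W, syn_y)
        (grad_W,) = torch.autograd.grad(inner_loss, W, create_graph=True)
        W = W - inner_lr * grad_W
    # Outer loss: the model trained on the synthetic set should fit the real
    # data; its gradient flows back into the synthetic images.
    outer_opt.zero_grad()
    F.cross_entropy(real_x @ W, real_y).backward()
    outer_opt.step()
```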

Generalized Variational Continual Learning

no code implementations • ICLR 2021 • Noel Loo, Siddharth Swaroop, Richard E. Turner

One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL).

Continual Learning • Variational Inference
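For orientation, the Online EWC approach mentioned above regularizes the current task's loss with a Fisher-weighted quadratic penalty around the previous tasks' solution. A minimal sketch of that standard penalty follows (the function and argument names are placeholders, not code from this paper):

```python
import torch

def ewc_penalty(model, prev_params, fisher, lam=1.0):
    """Quadratic penalty lam/2 * sum_i F_i * (theta_i - theta_i*)^2, where
    theta* are the parameters learned on previous tasks and F is a diagonal
    Fisher information estimate accumulated online."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - prev_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

# Hypothetical usage inside a training step on the current task:
#   loss = task_loss + ewc_penalty(model, prev_params, fisher)
```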

Combining Variational Continual Learning with FiLM Layers

no code implementations • ICML Workshop LifelongML 2020 • Noel Loo, Siddharth Swaroop, Richard E. Turner

The standard architecture for continual learning is a multi-headed neural network, which has shared body parameters and task-specific heads.

Continual Learning
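The multi-headed architecture described above (shared body parameters with task-specific heads) can be sketched as follows; the sizes and task count are placeholder assumptions, and this is the generic setup rather than code from the paper:

```python
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared body parameters plus one task-specific output head per task."""
    def __init__(self, in_dim=784, hidden=256, n_tasks=5, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_classes) for _ in range(n_tasks)]
        )

    def forward(self, x, task_id):
        # Route through the shared body, then the head for the given task.
        return self.heads[task_id](self.body(x))
```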
