Search Results for author: Akhilan Boopathy

Found 11 papers, 7 papers with code

Resampling-free Particle Filters in High-dimensions

No code implementations · 21 Apr 2024 · Akhilan Boopathy, Aneesh Muppidi, Peggy Yang, Abhiram Iyer, William Yue, Ila Fiete

State estimation is crucial for the performance and safety of numerous robotic applications.

Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity

1 code implementation · 26 Oct 2023 · Jaedong Hwang, Zhang-Wei Hong, Eric Chen, Akhilan Boopathy, Pulkit Agrawal, Ila Fiete

Deep reinforcement learning methods exhibit impressive performance on a range of tasks but still struggle on hard exploration tasks in large environments with sparse rewards.

Grid Cell-Inspired Fragmentation and Recall for Efficient Map Building

1 code implementation · 11 Jul 2023 · Jaedong Hwang, Zhang-Wei Hong, Eric Chen, Akhilan Boopathy, Pulkit Agrawal, Ila Fiete

Agents build and use a local map to predict their observations; high surprisal leads to a "fragmentation event" that truncates the local map.

Tasks: Clustering, Navigate, +1
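The abstract above describes a surprisal-triggered "fragmentation event" that truncates the agent's local map. As a hedged illustration only (not the paper's FarMap implementation), here is a minimal sketch of that idea: the threshold value, class names, and the list-based map store are all assumptions made for this example.

```python
import numpy as np

def surprisal(predicted_probs: np.ndarray, observed: int) -> float:
    """Surprisal (negative log-likelihood) of an observation under the agent's predictions."""
    return -np.log(predicted_probs[observed] + 1e-12)

class LocalMapper:
    """Toy agent that truncates ("fragments") its local map when surprisal spikes."""

    def __init__(self, threshold: float):
        self.threshold = threshold   # surprisal level that triggers a fragmentation event
        self.local_map = []          # observations in the current fragment
        self.fragments = []          # completed fragments (a recallable long-term store)

    def step(self, predicted_probs: np.ndarray, observed: int) -> bool:
        """Record one observation; return True if a fragmentation event occurred."""
        s = surprisal(predicted_probs, observed)
        if s > self.threshold and self.local_map:
            # High surprisal: close out the current local map and start a new one.
            self.fragments.append(self.local_map)
            self.local_map = []
            fragmented = True
        else:
            fragmented = False
        self.local_map.append(observed)
        return fragmented
```

In this sketch, a well-predicted observation (high assigned probability) extends the current fragment, while a surprising one closes the fragment and opens a new local map, mirroring the one-sentence description above.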

Model-agnostic Measure of Generalization Difficulty

1 code implementation · 1 May 2023 · Akhilan Boopathy, Kevin Liu, Jaedong Hwang, Shu Ge, Asaad Mohammedsaleh, Ila Fiete

The measure of a machine learning algorithm is the difficulty of the tasks it can perform, and sufficiently difficult tasks are critical drivers of strong machine learning models.

Tasks: Inductive Bias, Meta-Learning

Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle

1 code implementation · 24 Mar 2023 · Rylan Schaeffer, Mikail Khona, Zachary Robertson, Akhilan Boopathy, Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, Oluwasanmi Koyejo

Double descent is a surprising phenomenon in machine learning in which, as the number of model parameters grows relative to the number of data points, test error drops as models grow ever larger into the highly overparameterized (data-undersampled) regime.

Tasks: Learning Theory, Regression
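The overparameterized-regime behavior described above can be reproduced in the simplest setting the paper studies, linear regression. Below is a minimal sketch, not the paper's code: it fits minimum-norm least squares (via the pseudoinverse) on a growing prefix of random features; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def double_descent_curve(n_train=20, n_test=200, d_total=60, noise=0.1, seed=0):
    """Test error of minimum-norm least squares as the number of features p grows."""
    rng = np.random.default_rng(seed)
    w_star = rng.normal(size=d_total) / np.sqrt(d_total)  # ground-truth weights
    X_train = rng.normal(size=(n_train, d_total))
    X_test = rng.normal(size=(n_test, d_total))
    y_train = X_train @ w_star + noise * rng.normal(size=n_train)
    y_test = X_test @ w_star + noise * rng.normal(size=n_test)

    errors = []
    for p in range(1, d_total + 1):
        # Fit on the first p features; np.linalg.pinv yields the minimum-norm
        # interpolating solution once p exceeds n_train (overparameterized regime).
        w_hat = np.linalg.pinv(X_train[:, :p]) @ y_train
        preds = X_test[:, :p] @ w_hat
        errors.append(np.mean((preds - y_test) ** 2))
    return np.array(errors)
```

With these (illustrative) defaults, the curve typically spikes near the interpolation threshold p = n_train and then falls again as p grows, which is the double-descent shape the abstract refers to.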

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective

1 code implementation · 15 Jun 2021 · Akhilan Boopathy, Ila Fiete

Recent works have examined theoretical and empirical properties of wide neural networks trained in the Neural Tangent Kernel (NTK) regime.
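The NTK mentioned above is the Gram matrix of per-example parameter gradients. As a hedged illustration (not this paper's method), here is a closed-form empirical NTK for a tiny one-hidden-layer ReLU network; the function name and architecture are assumptions made for the example.

```python
import numpy as np

def empirical_ntk(X, W1, w2):
    """Empirical NTK of f(x) = w2 @ relu(W1 @ x).

    K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, summed over both
    layers' parameters, computed in closed form for this architecture.
    """
    H = X @ W1.T                 # pre-activations, shape (n, hidden)
    A = np.maximum(H, 0.0)       # hidden activations (gradients w.r.t. w2)
    D = (H > 0).astype(float)    # ReLU activation pattern per example
    # Output-layer term A @ A.T plus hidden-layer term:
    # sum_k w2[k]^2 * D[i,k] * D[j,k] * (x_i . x_j)
    return A @ A.T + (X @ X.T) * ((D * w2**2) @ D.T)
```

By construction this kernel is symmetric and positive semidefinite; in the wide-network (NTK) regime studied above, such a kernel stays approximately fixed during training.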

Fast Training of Provably Robust Neural Networks by SingleProp

No code implementations · 1 Feb 2021 · Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.

Proper Network Interpretability Helps Adversarial Robustness in Classification

1 code implementation · ICML 2020 · Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that adversarial examples can be hidden from neural network interpretability (namely, crafted so that network interpretation maps remain visually similar), and that interpretability itself is susceptible to adversarial attacks.

Tasks: Adversarial Robustness, Classification, +3

Efficient Training of Robust and Verifiable Neural Networks

No code implementations · 25 Sep 2019 · Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

We propose that many common certified defenses can be viewed under a unified framework of regularization.

Visual Interpretability Alone Helps Adversarial Robustness

No code implementations · 25 Sep 2019 · Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and that interpretability itself is susceptible to adversarial attacks.

Tasks: Adversarial Robustness

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

2 code implementations · 29 Nov 2018 · Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.
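CNN-Cert certifies robustness by propagating layer-wise linear bounds through the network. As a much simpler stand-in that illustrates the same general idea of bound propagation (this is interval arithmetic, not CNN-Cert's tighter linear relaxation, and every name here is an assumption for the example), consider bounding a 1D convolution:

```python
import numpy as np

def conv_interval_bounds(lower, upper, kernel, bias):
    """Propagate elementwise input bounds through a 1D conv (valid padding).

    Interval arithmetic: positive kernel weights combine with the input's
    upper bound to push the output up, negative weights with the lower bound.
    Sound, but looser than CNN-Cert's layer-wise linear bounds.
    """
    k_pos = np.maximum(kernel, 0.0)
    k_neg = np.minimum(kernel, 0.0)
    out_len = lower.size - kernel.size + 1
    out_lower = np.empty(out_len)
    out_upper = np.empty(out_len)
    for i in range(out_len):
        lo_win = lower[i:i + kernel.size]
        up_win = upper[i:i + kernel.size]
        out_lower[i] = k_pos @ lo_win + k_neg @ up_win + bias
        out_upper[i] = k_pos @ up_win + k_neg @ lo_win + bias
    return out_lower, out_upper
```

Given an input known only up to an eps-ball (lower = x - eps, upper = x + eps), the returned intervals are guaranteed to contain the true conv output for every perturbed input; chaining such bounds through all layers is the basic shape of certification frameworks like CNN-Cert.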
