Search Results for author: Sihui Dai

Found 10 papers, 4 papers with code

Larimar: Large Language Models with Episodic Memory Control

no code implementations • 18 Mar 2024 Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarath Swaminathan, Sihui Dai, Aurélie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiří Navrátil, Soham Dan, Pin-Yu Chen

Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today.

PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses

1 code implementation • 19 Oct 2023 Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal

State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.
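For context, an adversarial patch attack pastes a small, adversarially optimized region into the input image. Below is a minimal PyTorch sketch of applying an already-optimized patch; the patch contents, size, and placement are illustrative assumptions, not the paper's method:

```python
import torch

def apply_patch(images, patch, x0=0, y0=0):
    """Paste a fixed adversarial patch into a batch of images.

    images: (B, C, H, W) tensor in [0, 1]
    patch:  (C, h, w) tensor in [0, 1], assumed already optimized
    (x0, y0): top-left corner where the patch is placed
    """
    patched = images.clone()
    _, h, w = patch.shape
    patched[:, :, y0:y0 + h, x0:x0 + w] = patch
    return patched

# Usage: a random 32x32 "patch" on 224x224 inputs (illustrative only;
# a real attack would optimize the patch to flip the prediction).
images = torch.rand(8, 3, 224, 224)
patch = torch.rand(3, 32, 32)
adv = apply_patch(images, patch, x0=96, y0=96)
```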

MultiRobustBench: Benchmarking Robustness Against Multiple Attacks

no code implementations • 21 Feb 2023 Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal

Using our framework, we present the first leaderboard, MultiRobustBench, for benchmarking multi-attack evaluation, which captures performance across attack types and attack strengths.

Benchmarking
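To illustrate the multi-attack evaluation idea, here is a sketch of per-example worst-case accuracy over a set of attacks; this is not the leaderboard's actual scoring (which the paper defines across attack types and strengths), and `attacks` is a hypothetical list of callables:

```python
import torch

def multi_attack_accuracy(model, x, y, attacks):
    """Worst-case accuracy over a set of attacks.

    attacks: list of callables (model, x, y) -> adversarial x.
    An example counts as correct only if the model classifies it
    correctly under *every* attack in the list.
    """
    correct = torch.ones(len(y), dtype=torch.bool)
    for attack in attacks:
        x_adv = attack(model, x, y)
        preds = model(x_adv).argmax(dim=1)
        correct &= preds.eq(y).cpu()
    return correct.float().mean().item()
```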

Formulating Robustness Against Unforeseen Attacks

1 code implementation • 28 Apr 2022 Sihui Dai, Saeed Mahloujifar, Prateek Mittal

Based on our generalization bound, we propose variation regularization (VR) which reduces variation of the feature extractor across the source threat model during training.
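As a rough sketch of the idea, a variation-style regularizer penalizes how much the feature extractor's output moves under perturbations from the source threat model. Note the paper defines variation as a maximum over pairs of perturbed inputs; this simplified version compares clean and adversarial features and assumes `x_adv` is precomputed (e.g., by PGD within the source threat model):

```python
import torch

def vr_loss(feat_extractor, x, x_adv, lam=1.0):
    """Simplified variation-regularization term.

    Penalizes the distance between features of a clean input and a
    perturbed input from the source threat model. Added to the usual
    training loss with weight lam (an assumed hyperparameter).
    """
    f_clean = feat_extractor(x)     # (B, D) clean features
    f_adv = feat_extractor(x_adv)   # (B, D) features under perturbation
    return lam * (f_adv - f_clean).norm(p=2, dim=1).mean()
```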

Parameterizing Activation Functions for Adversarial Robustness

no code implementations • 11 Oct 2021 Sihui Dai, Saeed Mahloujifar, Prateek Mittal

To address this, we analyze the direct impact of activation shape on robustness through parameterized activation functions (PAFs), and observe that activation shapes with positive outputs on negative inputs and with high finite curvature can increase robustness.

Adversarial Robustness
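For illustration, a PAF can be as simple as an activation with a learnable shape parameter. The sketch below uses a softplus with learnable curvature, which is positive on negative inputs and has finite curvature, the two shape properties highlighted above; it is an illustrative parameterization, not necessarily the one used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamSoftplus(nn.Module):
    """Illustrative parameterized activation function (PAF).

    A learnable beta controls the curvature of softplus:
    softplus_beta(x) = (1 / beta) * log(1 + exp(beta * x)).
    The output is positive even for negative inputs, and the
    curvature is finite everywhere.
    """
    def __init__(self, init_beta=1.0):
        super().__init__()
        self.beta = nn.Parameter(torch.tensor(init_beta))

    def forward(self, x):
        beta = self.beta.clamp(min=1e-3)  # keep curvature parameter positive
        return F.softplus(beta * x) / beta
```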

Neural Networks with Recurrent Generative Feedback

1 code implementation • NeurIPS 2020 Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Y. Tsao, Anima Anandkumar

This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment.

Adversarial Robustness
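As an illustration of the feedback loop (not the paper's exact CNN-F updates, which are derived from MAP inference in a specific generative model), here is a sketch that alternates a bottom-up encoder with a top-down decoder and feeds the reconstruction back toward self-consistency:

```python
import torch

def recurrent_feedback_inference(encoder, decoder, x, n_iters=5):
    """Illustrative recurrent generative feedback loop.

    encoder: bottom-up recognition network, x -> latent/prediction
    decoder: top-down generative network, latent -> reconstructed x
    Iterates until the recognition and generation passes are
    (approximately) self-consistent, then returns the final output.
    """
    h = x
    for _ in range(n_iters):
        z = encoder(h)         # bottom-up: infer latent from current input
        x_hat = decoder(z)     # top-down: generate input from latent
        h = 0.5 * (x + x_hat)  # assumed feedback rule: average reconstruction
                               # with the observed input to stay anchored to it
    return encoder(h)
```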
