Search Results for author: Nikhil Madaan

Found 3 papers, 2 with code

Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning

no code implementations · 17 Oct 2023 · Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong

Our research, initially spurred by test-time evasion attacks, investigates the intersection of adversarial training and backdoor attacks within federated learning, introducing Adversarial Robustness Unhardening (ARU).

Adversarial Robustness · Federated Learning
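
As a hedged illustration of the threat model described above (a sketch, not the paper's ARU algorithm: the backdoor mechanism is simplified here to clean-only training, and the function names and FedAvg aggregation are assumptions), the snippet contrasts a benign client that hardens the model with one-step FGSM adversarial training against a compromised client whose updates, once averaged by the server, pull the model back toward a non-robust decision boundary:

```python
# Sketch only: a simplified stand-in for the unhardening threat model,
# not the paper's ARU method. A real attacker would use backdoor-style
# updates; here the "attack" is reduced to clean-only training.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    # One-step FGSM: perturb inputs along the sign of the input gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def benign_update(model, loader, lr=0.01):
    # Adversarially hardened local step: fit on perturbed inputs.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
    return model.state_dict()

def unhardening_update(model, loader, lr=0.01):
    # Compromised client: train only on clean inputs, so its averaged
    # contribution erodes the robustness benign clients built up.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    # Server: parameter-wise mean of all client state_dicts.
    return {k: torch.stack([s[k].float() for s in states]).mean(0)
            for k in states[0]}
```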

LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

1 code implementation · 21 Sep 2023 · Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F. Fouhey, Joyce Chai

While existing approaches often rely on extensive labeled data or exhibit limitations in handling complex language queries, we propose LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model (LLM)-based 3D visual grounding pipeline.

Language Modelling · Large Language Model · +3
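
To make the agent framing concrete, here is an illustrative orchestration loop in the spirit of the abstract (not the authors' released code): the LLM decomposes a compositional query into noun phrases, an open-vocabulary 3D grounder proposes candidate boxes for each, and the LLM then reasons over the candidates' spatial layout to select the target. `call_llm` and `ground_noun` are hypothetical plug-in points, not real APIs:

```python
# Illustrative agent loop only; `call_llm` and `ground_noun` are
# hypothetical stand-ins for an LLM API and a 3D grounding tool.
from dataclasses import dataclass

@dataclass
class Box3D:
    label: str
    center: tuple   # (x, y, z) in scene coordinates
    score: float    # grounder confidence

def call_llm(prompt: str) -> str:
    # Plug in a real chat-completion client here.
    raise NotImplementedError

def ground_noun(noun: str, scene) -> list:
    # Plug in an open-vocabulary 3D visual grounder here.
    raise NotImplementedError

def ground_query(query: str, scene) -> Box3D:
    # 1. The LLM decomposes the query into groundable noun phrases.
    nouns = [n for n in call_llm(
        f"List the object noun phrases in: {query!r}, one per line."
    ).splitlines() if n.strip()]
    # 2. The grounding tool returns candidate boxes per noun phrase.
    candidates = {n: ground_noun(n, scene) for n in nouns}
    # 3. The LLM arbitrates among candidates using their spatial layout.
    layout = "\n".join(
        f"{n}: {[(b.center, round(b.score, 2)) for b in boxes]}"
        for n, boxes in candidates.items())
    idx = call_llm(
        f"Query: {query}\nCandidates:\n{layout}\n"
        "Reply with only the index of the target box for the first noun.")
    return candidates[nouns[0]][int(idx)]
```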

Characterizing Internal Evasion Attacks in Federated Learning

1 code implementation · 17 Sep 2022 · Taejin Kim, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong

However, combining adversarial training with personalized federated learning frameworks increases relative internal attack robustness by 60% compared to federated adversarial training and performs well under limited system resources.

Adversarial Robustness · Personalized Federated Learning · +1
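
The abstract attributes the robustness gain to combining adversarial training with personalization. A minimal sketch of one common way to realize this (the backbone/head split and all names are assumptions, not the paper's implementation): the server averages only shared backbone parameters, each client keeps a private head, and local training still uses adversarial examples as in the first sketch:

```python
# Sketch only: head/backbone personalization is one assumed scheme,
# not necessarily the paper's. Local training would still use
# adversarial examples (e.g., fgsm_perturb from the first sketch).
import torch

def fedavg_shared(client_states, shared_keys):
    # Average only the shared backbone parameters across clients.
    return {k: torch.stack([s[k].float() for s in client_states]).mean(0)
            for k in shared_keys}

def apply_global(model, global_shared):
    # Merge averaged backbone weights; the personalized head stays local.
    state = model.state_dict()
    state.update(global_shared)
    model.load_state_dict(state)

# Example key selection, assuming modules named "backbone.*" and "head.*":
# shared_keys = [k for k in model.state_dict() if k.startswith("backbone.")]
```

Averaging only the backbone lets each client's private head absorb data heterogeneity, which is why such schemes can stay robust to internal attackers without sharing every parameter.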
