Search Results for author: Nilaksh Das

Found 21 papers, 9 papers with code

SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models

no code implementations • 14 May 2024 • Raghuveer Peri, Sai Muralidhar Jayanthi, Srikanth Ronanki, Anshu Bhatia, Karel Mundnich, Saket Dingliwal, Nilaksh Das, Zejiang Hou, Goeric Huybrechts, Srikanth Vishnubhotla, Daniel Garcia-Romero, Sundararajan Srinivasan, Kyu J Han, Katrin Kirchhoff

Despite safety guardrails, experiments on jailbreaking demonstrate the vulnerability of SLMs to adversarial perturbations and transfer attacks, with average attack success rates of 90% and 10%, respectively, when evaluated on a dataset of carefully designed harmful questions spanning 12 different toxic categories.

Adversarial Robustness • Instruction Following +1

SpeechVerse: A Large-scale Generalizable Audio Language Model

no code implementations • 14 May 2024 • Nilaksh Das, Saket Dingliwal, Srikanth Ronanki, Rohit Paturi, Zhaocheng Huang, Prashant Mathur, Jie Yuan, Dhanush Bekal, Xing Niu, Sai Muralidhar Jayanthi, Xilai Li, Karel Mundnich, Monica Sunkara, Sundararajan Srinivasan, Kyu J Han, Katrin Kirchhoff

The models are instruction-finetuned using continuous latent representations extracted from the speech foundation model to achieve optimal zero-shot performance on a diverse range of speech processing tasks using natural language instructions.

Automatic Speech Recognition • Benchmarking +4
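The wiring this excerpt implies, a frozen speech encoder whose continuous latents are projected into an instruction-following LLM's embedding space, can be sketched as below. All module names, dimensions, and the simple linear projection are illustrative assumptions, not the SpeechVerse architecture itself.

```python
import torch
import torch.nn as nn

class SpeechToLLMAdapter(nn.Module):
    """Toy multimodal wiring: frozen speech encoder latents -> projection -> LLM inputs.

    Hypothetical sketch; the dimensions and the linear projector are assumptions,
    not the paper's actual architecture.
    """
    def __init__(self, speech_dim=512, llm_dim=2048):
        super().__init__()
        self.proj = nn.Linear(speech_dim, llm_dim)  # map speech latents into the LLM embedding space

    def forward(self, speech_latents, text_embeddings):
        # speech_latents:  (batch, frames, speech_dim) from a frozen speech foundation model
        # text_embeddings: (batch, tokens, llm_dim) embeddings of the natural-language instruction
        speech_tokens = self.proj(speech_latents)
        # Prepend projected speech frames to the instruction tokens; an LLM
        # would then decode conditioned on both modalities.
        return torch.cat([speech_tokens, text_embeddings], dim=1)

adapter = SpeechToLLMAdapter()
fused = adapter(torch.randn(1, 100, 512), torch.randn(1, 16, 2048))
print(fused.shape)  # torch.Size([1, 116, 2048])
```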

Mask The Bias: Improving Domain-Adaptive Generalization of CTC-based ASR with Internal Language Model Estimation

no code implementations • 5 May 2023 • Nilaksh Das, Monica Sunkara, Sravan Bodapati, Jinglun Cai, Devang Kulshreshtha, Jeff Farris, Katrin Kirchhoff

Internal language model estimation (ILME) has been proposed to mitigate this internal language model bias for autoregressive models such as attention-based encoder-decoder models and RNN-T.

Decoder • Domain Adaptation +1
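For orientation, ILME-style decoding typically subtracts an estimate of the ASR model's internal LM score during shallow fusion with an external, target-domain LM. A minimal sketch of that scoring rule follows; the lambda weights and probabilities are placeholders, not values or code from the paper.

```python
import math

def fused_score(log_p_asr, log_p_ilm, log_p_ext, lam_ilm=0.3, lam_ext=0.5):
    """Shallow-fusion beam score with internal LM subtraction (ILME-style).

    log_p_asr: log P(y | x) from the end-to-end ASR model
    log_p_ilm: estimated internal LM score log P_ILM(y)
    log_p_ext: external target-domain LM score log P_EXT(y)
    The lambda weights are illustrative; in practice they are tuned on dev data.
    """
    return log_p_asr - lam_ilm * log_p_ilm + lam_ext * log_p_ext

# A hypothesis the internal LM overly favors is down-weighted, letting the
# external target-domain LM steer decoding instead.
print(fused_score(math.log(0.4), math.log(0.6), math.log(0.1)))
```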

NeuroMapper: In-browser Visualizer for Neural Network Training

1 code implementation • 22 Oct 2022 • Zhiyan Zhou, Kevin Li, Haekyu Park, Megan Dass, Austin Wright, Nilaksh Das, Duen Horng Chau

We present our ongoing work, NeuroMapper, an in-browser visualization tool that helps machine learning (ML) developers interpret the evolution of a model during training, providing a new way to monitor the training process and visually discover reasons for suboptimal training.

Dimensionality Reduction

Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning

no code implementations • 5 Apr 2022 • Nilaksh Das, Duen Horng Chau

In this work, we investigate the impact of performing such multi-task learning on the adversarial robustness of ASR models in the speech domain.

Adversarial Attack • Adversarial Robustness +5
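The setup under investigation, training an ASR model jointly with auxiliary tasks, reduces at its core to a weighted multi-task objective over a shared encoder. A minimal sketch under that assumption follows; the task heads, losses, and weights are hypothetical, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multitask_loss(shared_features, heads, targets, weights):
    """Weighted sum of per-task losses computed from a shared encoder's features.

    heads:   dict of task name -> task-specific output head (nn.Module)
    targets: dict of task name -> ground-truth tensor
    weights: dict of task name -> scalar loss weight (assumed; tuned per task)
    """
    total = torch.zeros(())
    for task, head in heads.items():
        total = total + weights[task] * F.cross_entropy(head(shared_features), targets[task])
    return total

# Hypothetical usage: a main recognition head plus one auxiliary head.
feats = torch.randn(8, 256)  # shared encoder output for a batch of 8
heads = {"asr": nn.Linear(256, 30), "aux": nn.Linear(256, 10)}
targets = {"asr": torch.randint(0, 30, (8,)), "aux": torch.randint(0, 10, (8,))}
print(multitask_loss(feats, heads, targets, {"asr": 1.0, "aux": 0.3}))
```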

SkeleVision: Towards Adversarial Resiliency of Person Tracking with Multi-Task Learning

1 code implementation • 2 Apr 2022 • Nilaksh Das, Sheng-Yun Peng, Duen Horng Chau

Person tracking using computer vision techniques has wide-ranging applications such as autonomous driving, home security, and sports analytics.

Adversarial Robustness • Autonomous Driving +3

Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries

no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau

We present ConceptEvo, a unified interpretation framework for deep neural networks (DNNs) that reveals the inception and evolution of learned concepts during training.

Decision Making

DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors

1 code implementation • CVPR 2022 • Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, Shengyun Peng, Haekyu Park, Duen Horng (Polo) Chau

With deep learning-based systems performing exceedingly well in many vision-related tasks, a major concern with their widespread deployment, especially in safety-critical applications, is their susceptibility to adversarial attacks.

Object • object-detection +2

NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks

1 code implementation • 29 Aug 2021 • Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau

Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts.

Semantic Similarity • Semantic Textual Similarity

EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models

2 code implementations • 30 Mar 2021 • Omar Shaikh, Jon Saad-Falcon, Austin P Wright, Nilaksh Das, Scott Freitas, Omar Isaac Asensio, Duen Horng Chau

The advent of larger machine learning (ML) models has improved state-of-the-art (SOTA) performance in various modeling tasks, ranging from computer vision to natural language processing.

SkeletonVis: Interactive Visualization for Understanding Adversarial Attacks on Human Action Recognition Models

no code implementations • 26 Jan 2021 • Haekyu Park, Zijie J. Wang, Nilaksh Das, Anindya S. Paul, Pruthvi Perumalla, Zhiyan Zhou, Duen Horng Chau

Skeleton-based human action recognition technologies are increasingly used in video-based applications such as home robotics, healthcare for aging populations, and surveillance.

Action Recognition • Temporal Action Localization

CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization

5 code implementations • 30 Apr 2020 • Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng Chau

Deep learning's great success motivates many practitioners and students to learn about this exciting technology.

Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning

no code implementations • 21 Jan 2020 • Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng Chau

Deep neural networks (DNNs) are increasingly powering high-stakes applications such as autonomous cars and healthcare; however, DNNs are often treated as "black boxes" in such applications.

Adversarial Attack

CNN 101: Interactive Visual Learning for Convolutional Neural Networks

no code implementations • 7 Jan 2020 • Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng Chau

The success of deep learning in solving problems previously thought to be hard has inspired many non-experts to learn and understand this exciting technology.

GOGGLES: Automatic Image Labeling with Affinity Coding

1 code implementation • 11 Mar 2019 • Nilaksh Das, Sanya Chaba, Renzhi Wu, Sakshi Gandhi, Duen Horng Chau, Xu Chu

We build the GOGGLES system that implements affinity coding for labeling image datasets by designing a novel set of reusable affinity functions for images, and propose a novel hierarchical generative model for class inference using a small development set.

Few-Shot Learning
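One concrete way to read "affinity function" in this excerpt is as a pairwise similarity score between images under some representation. Below is a minimal sketch using cosine similarity of pretrained VGG-16 features; GOGGLES composes many such functions with a hierarchical generative model on top, so this is only an illustrative assumption, not the system's implementation.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative affinity function: cosine similarity of pretrained VGG-16
# features (the encoder choice is an assumption made for this sketch).
encoder = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def affinity(images):
    # images: (n, 3, 224, 224) batch; returns an (n, n) affinity matrix
    feats = encoder(images).flatten(1)   # (n, d) feature vectors
    feats = F.normalize(feats, dim=1)    # unit-norm so the dot product is cosine similarity
    return feats @ feats.T               # pairwise affinities in [-1, 1]

print(affinity(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 4])
```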

The Efficacy of SHIELD under Different Threat Models

no code implementations • 1 Feb 2019 • Cory Cornelius, Nilaksh Das, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

To evaluate the robustness of the defense against an adaptive attacker, we consider the targeted-attack success rate of Projected Gradient Descent (PGD), a strong gradient-based attack from the adversarial machine learning literature.

Adversarial Attack • Image Classification
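Targeted PGD, referenced in this excerpt, iteratively steps the input toward a chosen target class and projects each step back into an ε-ball around the clean input. A standard L∞ sketch follows; the step size, budget, and iteration count are illustrative defaults, not the paper's evaluation settings.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8/255, alpha=2/255, steps=10):
    """Targeted L-infinity PGD: descend the loss toward `target`, projecting
    each iterate back into the eps-ball around the clean input x.

    model: a classifier returning logits; x: images in [0, 1];
    target: desired (incorrect) labels. Hyperparameters are assumed values.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Minus sign: move toward the target class rather than away from the true one.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep a valid pixel range
    return x_adv.detach()
```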

ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio

no code implementations • 30 May 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

Adversarial machine learning research has recently demonstrated the feasibility of confusing automatic speech recognition (ASR) models by introducing acoustically imperceptible perturbations to audio samples.

Adversarial Attack • Audio Compression +3

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression

3 code implementations • 19 Feb 2018 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li Chen, Michael E. Kounavis, Duen Horng Chau

The rapidly growing body of research in adversarial machine learning has demonstrated that deep neural networks (DNNs) are highly vulnerable to adversarially generated images.
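The intuition behind a JPEG-based defense like Shield is that lossy quantization removes the high-frequency perturbation patterns many attacks rely on. A minimal round-trip sketch with Pillow follows; Shield itself additionally randomizes compression levels across image regions (stochastic local quantization), so this is only the simplest version of the idea.

```python
import io

import numpy as np
from PIL import Image

def jpeg_preprocess(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an image through JPEG to squash small adversarial
    perturbations before classification.

    image: HxWx3 uint8 array; quality: JPEG quality factor (assumed default).
    """
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

clean = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)
print(jpeg_preprocess(clean, quality=60).shape)  # (224, 224, 3)
```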

Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression

no code implementations • 8 May 2017 • Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, Duen Horng Chau

Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition.
