no code implementations • 9 Sep 2024 • Youngeun Kim, Jun Fang, Qin Zhang, Zhaowei Cai, Yantao Shen, Rahul Duggal, Dripta S. Raychaudhuri, Zhuowen Tu, Yifan Xing, Onkar Dabeer
Our DPaRL learns to generate dynamic prompts at inference time, rather than relying on a static prompt pool as in previous prompt-based continual learning (PCL) methods.
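The snippet does not detail DPaRL's architecture, but the static-pool versus dynamic-generation contrast can be sketched in a few lines of PyTorch. The module names, sizes, and top-k lookup below are illustrative assumptions, not the paper's design:

```python
# Minimal sketch: static prompt pool (prior PCL) vs. dynamic prompt generation.
# All shapes and module choices are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StaticPromptPool(nn.Module):
    """Prior PCL style: select the closest prompts from a fixed, learned pool."""
    def __init__(self, pool_size=20, prompt_len=5, dim=768, top_k=5):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query_feat):                      # (B, dim)
        sim = F.normalize(query_feat, dim=-1) @ F.normalize(self.keys, dim=-1).T
        idx = sim.topk(self.top_k, dim=-1).indices      # (B, top_k)
        return self.prompts[idx].flatten(1, 2)          # (B, top_k*prompt_len, dim)

class DynamicPromptGenerator(nn.Module):
    """DPaRL style (as described): generate prompts on the fly per input."""
    def __init__(self, prompt_len=5, dim=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, prompt_len * dim))
        self.prompt_len, self.dim = prompt_len, dim

    def forward(self, query_feat):                      # (B, dim)
        return self.net(query_feat).view(-1, self.prompt_len, self.dim)

x = torch.randn(4, 768)                                 # e.g., a [CLS] feature
print(StaticPromptPool()(x).shape, DynamicPromptGenerator()(x).shape)
```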
1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau
Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs.
1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin
Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
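For context, the canonical adversarial training loop (Madry-style PGD) that this line of work builds on looks roughly like the sketch below; the toy model, epsilon, and step counts are placeholder choices, not the paper's exact recipe:

```python
# PGD adversarial training: craft worst-case perturbations in an inner loop,
# then train the network on those adversarial examples.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf bounded adversarial examples via projected gradient descent."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()               # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                         # keep a valid image
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """One outer step: fit the model on adversarial examples, not clean ones."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(adv_train_step(model, opt, x, y))
```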
no code implementations • 30 Sep 2022 • Rahul Duggal, Shengyun Peng, Hao Zhou, Duen Horng Chau
In this paper, we propose a new and complementary direction for improving performance on long-tailed datasets: optimizing the backbone architecture through neural architecture search (NAS).
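The snippet does not describe the search procedure, so the sketch below shows only the general shape of the idea: sample backbone configurations and score each candidate on a class-balanced metric. The search space, random-search strategy, and scoring stub are all assumptions, not the paper's NAS method:

```python
# Illustrative random search over backbone configurations, scored by a
# (faked) class-balanced accuracy proxy so the loop runs end to end.
import random

SEARCH_SPACE = {
    "depth": [2, 3, 4],        # number of conv stages
    "width": [16, 32, 64],     # base channel count
    "kernel": [3, 5],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def balanced_accuracy_proxy(arch):
    """Placeholder for: briefly train `arch` and return mean per-class accuracy."""
    return random.random()

def random_search(trials=20):
    best_arch, best_score = None, -1.0
    for _ in range(trials):
        arch = sample_architecture()
        score = balanced_accuracy_proxy(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())
```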
no code implementations • 27 Sep 2022 • Rahul Duggal, Hao Zhou, Shuo Yang, Jun Fang, Yuanjun Xiong, Wei Xia
With the shift towards on-device deep learning, ensuring consistent behavior of an AI service across diverse compute platforms becomes tremendously important.
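One simple way to make "consistent behavior" concrete is a disagreement ("prediction churn") metric between two variants of the same model; the metric name and setup below are assumptions rather than the paper's exact measure:

```python
# Prediction churn: fraction of inputs on which two model variants
# (e.g., a server model and its quantized on-device copy) disagree.
import numpy as np

def prediction_churn(logits_a: np.ndarray, logits_b: np.ndarray) -> float:
    """Fraction of examples where the two models' top-1 predictions differ."""
    return float(np.mean(logits_a.argmax(1) != logits_b.argmax(1)))

rng = np.random.default_rng(0)
server_logits = rng.normal(size=(1000, 10))
device_logits = server_logits + rng.normal(scale=0.1, size=(1000, 10))  # simulated platform noise
print(f"churn: {prediction_churn(server_logits, device_logits):.3f}")
```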
no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau
We present ConceptEvo, a unified interpretation framework for deep neural networks (DNNs) that reveals the inception and evolution of learned concepts during training.
1 code implementation • 29 Aug 2021 • Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau
Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts.
no code implementations • CVPR 2021 • Rahul Duggal, Hao Zhou, Shuo Yang, Yuanjun Xiong, Wei Xia, Zhuowen Tu, Stefano Soatto
Existing systems use the same embedding model to compute representations (embeddings) for the query and gallery images.
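The shared-model setup this sentence describes reduces to nearest-neighbor search in a single embedding space; a minimal sketch, with a stand-in embedding function and illustrative dimensions:

```python
# One shared embedding model encodes both gallery (offline) and queries
# (online); retrieval is cosine-similarity nearest-neighbor search.
import numpy as np

def embed(images: np.ndarray) -> np.ndarray:
    """Stand-in for a shared embedding model; returns L2-normalized vectors."""
    feats = images.reshape(len(images), -1)[:, :128]   # fake 128-d features
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

rng = np.random.default_rng(0)
gallery = embed(rng.normal(size=(500, 3 * 32 * 32)))   # indexed offline
query = embed(rng.normal(size=(5, 3 * 32 * 32)))       # encoded at query time

scores = query @ gallery.T                             # cosine similarity
print(scores.argmax(axis=1))                           # top-1 match per query
```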
1 code implementation • 31 Jan 2021 • Scott Freitas, Rahul Duggal, Duen Horng Chau
Computer vision is playing an increasingly important role in automated malware detection with the rise of image-based binary representations.
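The image-based binary representation referred to here is typically obtained by reshaping raw executable bytes into a grayscale image (in the spirit of Nataraj et al.'s malware images); the fixed width below is a common convention, not necessarily this paper's exact pipeline:

```python
# Convert a binary file's raw bytes into a 2-D grayscale image that a
# CNN-based malware detector can consume.
import numpy as np

def binary_to_grayscale(path: str, width: int = 256) -> np.ndarray:
    """Read a file as raw bytes and reshape into an (H, width) uint8 image."""
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width
    return data[: height * width].reshape(height, width)

# Usage: img = binary_to_grayscale("sample.exe"); then feed img to a CNN.
```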
no code implementations • 22 Jun 2020 • Rahul Duggal, Scott Freitas, Sunny Dhamnani, Duen Horng Chau, Jimeng Sun
The natural world often follows a long-tailed data distribution where only a few classes account for most of the examples.
Ranked #40 on Long-tail Learning on CIFAR-10-LT (ρ=10) (the long-tailed construction is sketched below).
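A long-tailed split such as CIFAR-10-LT with imbalance ratio ρ = 10 is conventionally built by decaying per-class sample counts exponentially so that n_max / n_min = ρ; a minimal sketch (the paper's exact construction may differ):

```python
# Exponentially decaying per-class counts with imbalance ratio rho.
import numpy as np

def long_tail_counts(n_max: int = 5000, num_classes: int = 10, rho: float = 10.0):
    """Per-class sample counts with imbalance ratio rho = n_max / n_min."""
    return [int(n_max * rho ** (-k / (num_classes - 1))) for k in range(num_classes)]

counts = long_tail_counts()
print(counts)                   # head class: 5000, tail class: 500
print(counts[0] / counts[-1])   # equals rho
```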
1 code implementation • 29 Jan 2020 • Rahul Duggal, Scott Freitas, Cao Xiao, Duen Horng Chau, Jimeng Sun
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
1 code implementation • 19 Nov 2019 • Rahul Duggal, Cao Xiao, Richard Vuduc, Jimeng Sun
With CUP, we overcome two limitations of prior work: (1) non-uniform pruning, where CUP can efficiently determine the ideal number of filters to prune in each layer of a neural network.
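A minimal sketch of cluster-based filter pruning in this spirit: cluster each layer's filters by similarity and keep one representative per cluster, so the number of surviving filters varies per layer. The flattened-weight features, cosine distance, and threshold are assumptions, not CUP's exact formulation:

```python
# Cluster a conv layer's filters and keep one representative per cluster,
# yielding a layer-specific (non-uniform) pruning count.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def filters_to_keep(weight: np.ndarray, t: float = 1.0) -> list:
    """weight: (out_channels, in_channels, k, k). Returns filter indices to keep."""
    feats = weight.reshape(weight.shape[0], -1)
    Z = linkage(feats, method="average", metric="cosine")
    labels = fcluster(Z, t=t, criterion="distance")
    keep = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        norms = np.linalg.norm(feats[members], axis=1)
        keep.append(int(members[norms.argmax()]))   # strongest filter per cluster
    return sorted(keep)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))                 # a fake conv layer
print(len(filters_to_keep(w, t=0.9)), "of 64 filters kept")
```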
3 code implementations • 18 Feb 2019 • Max Allan, Alex Shvets, Thomas Kurmann, Zichen Zhang, Rahul Duggal, Yun-Hsuan Su, Nicola Rieke, Iro Laina, Niveditha Kalavakonda, Sebastian Bodenstedt, Luis Herrera, Wenqi Li, Vladimir Iglovikov, Huoling Luo, Jian Yang, Danail Stoyanov, Lena Maier-Hein, Stefanie Speidel, Mahdi Azizian
In mainstream computer vision and machine learning, public datasets such as ImageNet, COCO and KITTI have helped drive enormous improvements by enabling researchers to understand the strengths and limitations of different algorithms via performance comparison.
no code implementations • 15 Dec 2016 • Naushad Ansari, Anubha Gupta, Rahul Duggal
The loss function of the convolutional neural network is set up as the total squared error between the input image to the CNN and the reconstructed image at the CNN's output, leading to perfect reconstruction at the end of training.
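A minimal sketch of that setup, assuming a small convolutional encoder-decoder (the paper's actual architecture is not given in the snippet), trained with the summed squared error between input and reconstruction:

```python
# Tiny convolutional autoencoder trained with total squared reconstruction error.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(8, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 1, 32, 32)              # stand-in training images

for step in range(100):
    recon = model(x)
    loss = ((recon - x) ** 2).sum()        # total squared error
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```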