no code implementations • 19 Feb 2024 • Shuowei Jin, Yongji Wu, Haizhong Zheng, Qingzhao Zhang, Matthew Lentz, Z. Morley Mao, Atul Prakash, Feng Qian, Danyang Zhuo
Large language models (LLMs) have seen significant adoption for natural language tasks, owing their success to massive parameter counts (e.g., 70B+); however, LLM inference incurs significant computation and memory costs.
no code implementations • 9 Feb 2024 • Haizhong Zheng, Xiaoyan Bai, Beidi Chen, Fan Lai, Atul Prakash
The emergence of activation sparsity in LLMs provides a natural way to reduce this cost by activating only a subset of the model's parameters during inference.
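As a rough illustration of why activation sparsity saves compute, consider a ReLU MLP block: hidden units whose activation is exactly zero contribute nothing to the down-projection, so their weight columns can be skipped entirely. Below is a minimal NumPy sketch of that principle, with made-up dimensions; it illustrates the general idea, not the method of the paper above.

```python
# Minimal sketch of activation sparsity in a ReLU MLP block: zeroed
# hidden units let the down-projection touch only the active columns.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 64, 256                    # illustrative sizes

x = rng.normal(size=d_model)
W_up = rng.normal(size=(d_hidden, d_model))    # up-projection
W_down = rng.normal(size=(d_model, d_hidden))  # down-projection

h = np.maximum(W_up @ x, 0.0)        # ReLU activations, many exactly zero
active = np.nonzero(h)[0]            # indices of active neurons

# Dense computation touches every column of W_down ...
y_dense = W_down @ h
# ... while the sparse one only loads the columns for active neurons.
y_sparse = W_down[:, active] @ h[active]

assert np.allclose(y_dense, y_sparse)
print(f"{len(active)}/{d_hidden} neurons active")
```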
no code implementations • 11 Oct 2023 • Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash
In this paper, we recognize that images share common features in a hierarchical way, owing to the inherent hierarchical structure of the classification system, a property that current data parameterization methods overlook.
no code implementations • 1 Jun 2023 • Jiachen Sun, Haizhong Zheng, Qingzhao Zhang, Atul Prakash, Z. Morley Mao, Chaowei Xiao
CALICO's efficacy is substantiated by extensive evaluations on 3D object detection and BEV map segmentation tasks, where it delivers significant performance improvements.
1 code implementation • 28 Oct 2022 • Haizhong Zheng, Rui Liu, Fan Lai, Atul Prakash
We then propose a novel one-shot coreset selection method, Coverage-centric Coreset Selection (CCS), that jointly considers overall coverage of the data distribution and the importance of each example.
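For intuition, a coverage-centric selector differs from naive top-k importance selection by spreading its budget across the whole score distribution rather than concentrating on the hardest examples. The sketch below stratifies examples into importance-score bins and samples from every bin; the bin count and equal-allocation rule are illustrative assumptions, not the exact CCS algorithm.

```python
# A hedged sketch of coverage-centric selection: stratify examples by
# importance score and spread the budget across strata, so easy and
# hard regions of the distribution are all represented.
import numpy as np

def coverage_centric_select(scores, budget, n_bins=10, seed=0):
    """Pick up to `budget` example indices while covering all score strata."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    bin_ids = np.digitize(scores, edges[1:-1])   # quantile bin per example
    selected, per_bin = [], budget // n_bins
    for b in range(n_bins):
        idx = np.where(bin_ids == b)[0]
        take = min(per_bin, len(idx))
        selected.extend(rng.choice(idx, size=take, replace=False))
    return np.asarray(selected)

scores = np.random.default_rng(1).random(10_000)  # stand-in importance scores
coreset = coverage_centric_select(scores, budget=1_000)
print(len(coreset), "examples selected")
```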
no code implementations • 17 Jul 2020 • Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash
Moreover, we design the first diagnostic method to quantify the vulnerability contributed by each layer, which can be used to identify vulnerable parts of model architectures.
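The paper's exact diagnostic is not reproduced here, but one simple way to probe per-layer vulnerability is to compare each layer's activations on clean versus perturbed inputs. The PyTorch sketch below does this with forward hooks; the stand-in model, the random perturbation, and the relative-norm metric are all assumptions made to keep the example self-contained.

```python
# A hedged sketch (not the paper's metric): measure how much each
# layer's activations move under an input perturbation, relative to
# the clean activations.
import torch
import torch.nn as nn

model = nn.Sequential(             # stand-in model; any nn.Module works
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

def layer_distortion(model, x_clean, x_adv):
    """Relative activation change per layer: ||f_l(x_adv) - f_l(x)|| / ||f_l(x)||."""
    acts = {}
    hooks = [m.register_forward_hook(
                 lambda mod, inp, out, name=name:
                     acts.setdefault(name, []).append(out.detach()))
             for name, m in model.named_modules() if isinstance(m, nn.Linear)]
    with torch.no_grad():
        model(x_clean)             # hooks record clean activations first
        model(x_adv)               # then perturbed activations
    for h in hooks:
        h.remove()
    return {name: ((a[1] - a[0]).norm() / a[0].norm()).item()
            for name, a in acts.items()}

x = torch.rand(8, 1, 28, 28)
x_adv = x + 0.03 * torch.sign(torch.randn_like(x))  # random stand-in perturbation
print(layer_distortion(model, x, x_adv))
```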
2 code implementations • CVPR 2020 • Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash
Adversarial training is an effective defense method to protect classification models against adversarial attacks.
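For context, the baseline such work builds on is the standard PGD-based adversarial training loop (Madry et al.): craft a worst-case perturbation inside an L-infinity ball, then train on the perturbed batch. The sketch below shows that baseline loop only; the hyperparameters are illustrative, and the paper's specific efficiency technique is not reproduced here.

```python
# Standard PGD adversarial training loop (Madry et al.), shown as
# background; eps, alpha, and steps are illustrative choices.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-inf-bounded adversarial examples via projected gradient descent."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()               # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                         # keep a valid image
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean ones."""
    model.eval()                   # freeze batch-norm stats while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```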
no code implementations • 27 May 2019 • Haizhong Zheng, Earlence Fernandes, Atul Prakash
Recently, interpretable models called self-explaining models (SEMs) have been proposed with the goal of providing interpretability robustness.
no code implementations • 26 May 2019 • Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng
Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that can cause a DNN to misclassify without any perceptible change to the image.
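A minimal FGSM example makes this phenomenon concrete: a single gradient-sign step of imperceptible magnitude is often enough to change a classifier's prediction. The model and epsilon below are placeholders (an untrained ResNet-18 on a random input), chosen only to keep the sketch self-contained.

```python
# Minimal FGSM illustration: perturb the input by eps * sign(grad of
# the loss w.r.t. the input) and compare predictions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained placeholder weights

x = torch.rand(1, 3, 224, 224, requires_grad=True)
y = model(x).argmax(dim=1)                     # current prediction as the label

loss = F.cross_entropy(model(x), y)
loss.backward()                                # populates x.grad

eps = 4 / 255                                  # imperceptible in pixel terms
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("clean:", y.item(), "adversarial:", model(x_adv).argmax(dim=1).item())
```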