1 code implementation • 28 Apr 2024 • Jinghan Jia, Yihua Zhang, Yimeng Zhang, Jiancheng Liu, Bharat Runwal, James Diffenderfer, Bhavya Kailkhura, Sijia Liu
The rise of Large Language Models (LLMs) has highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices.
1 code implementation • 2 Feb 2024 • Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
Building upon this insight, in this work, we propose a novel density loss that encourages higher activation sparsity (equivalently, lower activation density) in the pre-trained models.
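One plausible way to realize such a density loss is an L1-style penalty on intermediate activations, which pushes many units toward zero and thereby lowers activation density. The sketch below is an illustrative assumption, not the paper's exact formulation: the hook-based activation collection and the `alpha` weighting are hypothetical choices made for demonstration.

```python
import torch
import torch.nn as nn

class DensityLoss(nn.Module):
    """Sketch of a density penalty: mean absolute activation across ReLU
    outputs, added to the task loss. All details here are assumptions."""

    def __init__(self, model: nn.Module, alpha: float = 1e-4):
        super().__init__()
        self.alpha = alpha
        self.activations = []
        # Capture post-activation outputs via forward hooks on ReLU layers.
        for module in model.modules():
            if isinstance(module, nn.ReLU):
                module.register_forward_hook(self._save)

    def _save(self, module, inputs, output):
        self.activations.append(output)

    def forward(self, task_loss: torch.Tensor) -> torch.Tensor:
        # Mean absolute activation serves as a proxy for activation density;
        # penalizing it encourages sparser (lower-density) activations.
        density = torch.stack([a.abs().mean() for a in self.activations]).mean()
        self.activations.clear()  # reset for the next forward pass
        return task_loss + self.alpha * density
```

In use, one would run a normal forward pass (hooks record the activations), compute the task loss, and then wrap it with the density penalty before backpropagation.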
1 code implementation • 29 Aug 2023 • Diganta Misra, Muawiz Chaudhary, Agam Goyal, Bharat Runwal, Pin-Yu Chen
This empirical investigation underscores the need for a nuanced understanding beyond mere accuracy in sparse and quantized settings, thereby paving the way for further exploration in Visual Prompting techniques tailored for sparse and quantized models.
1 code implementation • 3 Aug 2022 • Bharat Runwal, Vivek, Sandeep Kumar
For demonstration, the experiments are conducted with the Graph Convolutional Neural Network (GCNN) architecture; however, the proposed framework is easily amenable to any existing GNN architecture.
1 code implementation • 4 Apr 2022 • Diganta Misra, Bharat Runwal, Tianlong Chen, Zhangyang Wang, Irina Rish
With the latest advances in deep learning, considerable attention has turned to the online learning paradigm due to its relevance in practical settings.