Search Results for author: Shaochen Zhong

Found 7 papers, 4 papers with code

LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

no code implementations 29 Feb 2024 Hongyi Liu, Zirui Liu, Ruixiang Tang, Jiayi Yuan, Shaochen Zhong, Yu-Neng Chuang, Li Li, Rui Chen, Xia Hu

Our aim is to raise awareness of the potential risks under the emerging share-and-play scenario, so as to proactively prevent potential consequences caused by LoRA-as-an-Attack.

LETA: Learning Transferable Attribution for Generic Vision Explainer

no code implementations 23 Dec 2023 Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu

To address this problem, we develop a pre-trained, DNN-based, generic explainer on large-scale image datasets, and leverage its transferability to explain various vision models for downstream tasks.

Editable Graph Neural Network for Node Classifications

no code implementations 24 May 2023 Zirui Liu, Zhimeng Jiang, Shaochen Zhong, Kaixiong Zhou, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu

However, model editing for graph neural networks (GNNs) is rarely explored, despite GNNs' widespread applicability.

Fake News Detection, Model Editing

Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model

1 code implementation NeurIPS 2023 Zirui Liu, Guanchu Wang, Shaochen Zhong, Zhaozhuo Xu, Daochen Zha, Ruixiang Tang, Zhimeng Jiang, Kaixiong Zhou, Vipin Chaudhary, Shuai Xu, Xia Hu

While the model parameters do contribute to memory usage, the primary memory bottleneck during training arises from storing feature maps, also known as activations, as they are crucial for gradient calculation.

Language Modelling, Stochastic Optimization
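The abstract's point that stored feature maps, not parameters, dominate training memory is easy to see directly. A minimal sketch, not the paper's sampling method, assuming PyTorch with a CUDA device; the layer shapes and batch size are arbitrary illustrative choices:

```python
# Minimal sketch (not the paper's method): compare parameter memory with peak
# training memory, which is dominated by the feature maps (activations) kept
# for the backward pass. Assumes PyTorch and a CUDA device; shapes are
# illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
).cuda()

param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

x = torch.randn(8, 3, 224, 224, device="cuda")
torch.cuda.reset_peak_memory_stats()
loss = model(x).sum()    # the forward pass stores every intermediate feature map
loss.backward()          # those activations are needed here for gradient calculation
peak_bytes = torch.cuda.max_memory_allocated()

print(f"parameters:           {param_bytes / 2**20:8.1f} MiB")  # well under 1 MiB here
print(f"peak training memory: {peak_bytes / 2**20:8.1f} MiB")   # hundreds of MiB, mostly activations
```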

Data-centric Artificial Intelligence: A Survey

10 code implementations 17 Mar 2023 Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, Xia Hu

Artificial Intelligence (AI) is making a profound impact in almost every domain.

Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions

1 code implementation ICLR 2022 Shaochen Zhong, Guanqun Zhang, Ningjia Huang, Shuai Xu

In this paper, we revisit the idea of kernel pruning (pruning only one or several $k \times k$ kernels out of a 3D filter), a heavily overlooked approach in the context of structured pruning because it naturally introduces sparsity to filters within the same convolutional layer, leaving the remaining network no longer dense.

Clustering, Network Pruning
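For intuition on what pruning individual $k \times k$ kernels out of a 3D filter means, and why it leaves the layer irregularly sparse rather than simply narrower, here is a minimal sketch. It is not the paper's lottery-regulated grouping; the L1 magnitude criterion, threshold, and layer shape are illustrative assumptions:

```python
# Minimal sketch of kernel-level pruning (not the paper's method): zero out
# individual k x k kernels inside each 3D filter by L1 magnitude. Unlike
# filter pruning, every filter can lose a different subset of its kernels,
# so the surviving layer is no longer dense.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)

with torch.no_grad():
    w = conv.weight                                # shape (32 filters, 16 kernels, 3, 3)
    kernel_norms = w.abs().sum(dim=(2, 3))         # L1 norm of each k x k kernel -> (32, 16)
    keep = kernel_norms >= kernel_norms.median()   # prune roughly the weaker half of kernels
    w.mul_(keep[:, :, None, None].to(w.dtype))     # zero the pruned kernels in place

sparsity = 1 - keep.float().mean().item()
print(f"kernels pruned: {sparsity:.0%}")           # ~50%, scattered across different filters
```

Because the zeroed kernels sit at different positions in different filters, the layer cannot be shrunk by simply dropping whole filters or channels; that irregularity is presumably what the grouped-convolution formulation named in the title is meant to handle.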
