no code implementations • 22 Oct 2024 • Linkang Du, Xuanru Zhou, Min Chen, Chusong Zhang, Zhou Su, Peng Cheng, Jiming Chen, Zhikun Zhang
As the deployment of machine learning (ML) systems becomes more widespread, especially with the introduction of larger ML models, we observe a surging demand for massive data.
1 code implementation • 3 Sep 2024 • Bokang Zhang, Yanglin Zhang, Zhikun Zhang, Jinglan Yang, Lingying Huang, Junfeng Wu
Neural Radiance Fields (NeRF) have revolutionized 3D computer vision and graphics, facilitating novel view synthesis and influencing sectors like extended reality and e-commerce.
no code implementations • 12 Aug 2024 • Yi Zhang, Zhikun Zhang, Xiangjun Wang
Traditional partial differential equations with constant coefficients often struggle to capture abrupt changes in real-world phenomena, leading to the development of variable coefficient PDEs and Markovian switching models.
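As a concrete (and purely illustrative) picture of Markovian switching, the sketch below lets a two-state Markov chain flip the diffusion coefficient of a 1-D heat equation between regimes; the coefficients, transition matrix, and grid sizes are made up for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain switching the diffusion coefficient
# (illustrative values, not from the paper).
coeffs = [0.1, 1.0]          # diffusion coefficient in each regime
P = np.array([[0.95, 0.05],  # per-step transition probabilities
              [0.10, 0.90]])

nx, nt, dx, dt = 100, 500, 1.0, 0.1
u = np.exp(-0.01 * (np.arange(nx) - nx / 2) ** 2)  # initial condition
state = 0

for _ in range(nt):
    state = rng.choice(2, p=P[state])                 # regime switch
    c = coeffs[state]
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)      # periodic Laplacian
    u = u + dt * c * lap / dx**2                      # explicit Euler step
```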
no code implementations • 5 Jul 2024 • Zhikun Zhang, Yiting Duan, Xiangjun Wang, Mingyuan Zhang
This paper proposes a novel fast online methodology for outlier detection called the exception maximization outlier detection method (EMODM), which employs probabilistic models and statistical algorithms to detect abnormal patterns from the outputs of complex systems.
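The excerpt does not spell out the algorithm, so as a rough flavor of probabilistic online outlier detection (a generic sketch, not EMODM itself; the class name and threshold are my own), consider flagging points whose log-likelihood under a running Gaussian fit falls below a threshold:

```python
import math

class OnlineGaussianOutlierDetector:
    """Illustrative online outlier detector: flags points whose
    log-likelihood under a running Gaussian fit is too low.
    (A generic sketch, not the EMODM algorithm itself.)"""

    def __init__(self, threshold=-8.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        # Welford's online mean/variance update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_outlier(self, x):
        if self.n < 2:
            return False
        var = max(self.m2 / (self.n - 1), 1e-12)
        loglik = -0.5 * (math.log(2 * math.pi * var)
                         + (x - self.mean) ** 2 / var)
        return loglik < self.threshold
```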
no code implementations • 5 Mar 2024 • ZiHao Wang, Rui Zhu, Dongruo Zhou, Zhikun Zhang, John Mitchell, Haixu Tang, XiaoFeng Wang
DPAdapter modifies and enhances the sharpness-aware minimization (SAM) technique, utilizing a two-batch strategy to provide a more accurate perturbation estimate and an efficient gradient descent, thereby improving parameter robustness against noise.
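One plausible reading of the two-batch strategy (a sketch of the general idea, not DPAdapter's released implementation; `two_batch_sam_step` and its parameters are my own naming) is a SAM-style update in which one batch estimates the sharpness perturbation and a second, independent batch supplies the descent gradient:

```python
import torch

def two_batch_sam_step(model, loss_fn, batch_a, batch_b, optimizer, rho=0.05):
    """One SAM-style update with two mini-batches: batch_a estimates the
    worst-case weight perturbation, batch_b supplies the update gradient.
    (An illustration of the two-batch idea, not DPAdapter's code.)"""
    xa, ya = batch_a
    xb, yb = batch_b

    # 1. Gradient on batch A gives the ascent (sharpness) direction.
    optimizer.zero_grad()
    loss_fn(model(xa), ya).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # 2. Step to the locally sharpest nearby weights.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)

    # 3. Gradient on batch B at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(xb), yb).backward()

    # 4. Undo the perturbation, then apply the optimizer update.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)
    optimizer.step()
```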
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
no code implementations • 18 Dec 2023 • Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang
Taking open-world attribution as an example, FAKEPCD attributes point clouds to known sources with an accuracy of 0.82-0.98 and to unknown sources with an accuracy of 0.73-1.00.
1 code implementation • 24 Oct 2023 • Xiaoyi Chen, Siyuan Tang, Rui Zhu, Shijun Yan, Lei Jin, ZiHao Wang, Liya Su, Zhikun Zhang, XiaoFeng Wang, Haixu Tang
In our research, we propose a novel attack, Janus, which exploits the fine-tuning interface to recover forgotten PIIs from the pre-training data in LLMs.
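The shape of the attack can be summarized against a hypothetical fine-tuning API; `FinetuneClient`, `finetune`, and `complete` below are placeholders of my own, not a real vendor interface, and the snippet is a conceptual skeleton rather than the paper's code.

```python
from dataclasses import dataclass

# Hypothetical fine-tuning client; the names are placeholders,
# not a real vendor API.
@dataclass
class FinetuneClient:
    model: str

    def finetune(self, pairs):
        """Fine-tune on (prompt, completion) pairs."""
        raise NotImplementedError

    def complete(self, prompt):
        """Query the fine-tuned model."""
        raise NotImplementedError

def janus_style_probe(client, known_pii_pairs, target_prompt):
    # Fine-tuning on a handful of known (identity -> PII) pairs steers
    # the model back toward memorized associations; querying a new
    # identity may then surface "forgotten" PII.
    client.finetune(known_pii_pairs)        # e.g. [("Email of Alice:", "alice@example.com")]
    return client.complete(target_prompt)   # e.g. "Email of Bob:"
```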
1 code implementation • 6 Sep 2023 • Linkang Du, Min Chen, Mingyang Sun, Shouling Ji, Peng Cheng, Jiming Chen, Zhikun Zhang
In safety-critical domains such as autonomous vehicles, offline deep reinforcement learning (offline DRL) is frequently used to train models on pre-collected datasets, as opposed to training them by interacting with the real-world environment, as in online DRL.
1 code implementation • 26 Aug 2023 • Chengkun Wei, Wenlong Meng, Zhikun Zhang, Min Chen, Minghu Zhao, Wenjing Fang, Lei Wang, Zihui Zhang, Wenzhi Chen
Instead of directly inverting the triggers, LMSanitator aims to invert the predefined attack vectors (pretrained models' output when the input is embedded with triggers) of the task-agnostic backdoors, which achieves much better convergence performance and backdoor detection accuracy.
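A heavily simplified sketch of the inversion objective (my own distillation of the idea, not LMSanitator's code; the toy encoder and all dimensions are assumptions): optimize a soft trigger so that the encoder's pooled output collapses to one fixed vector regardless of the surrounding context, the signature behavior of a task-agnostic backdoor.

```python
import torch

def invert_attack_vector(encoder, embed_dim, seq_len=5, steps=200, lr=0.1):
    """Optimize a soft trigger so the encoder output matches a candidate
    attack vector independently of random context. (A simplification of
    the inversion objective described above, not LMSanitator's code.)"""
    for p in encoder.parameters():
        p.requires_grad_(False)                        # encoder stays frozen
    trigger = torch.randn(1, seq_len, embed_dim, requires_grad=True)
    target = torch.randn(1, embed_dim)                 # candidate attack vector
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        context = torch.randn(8, 16, embed_dim)        # random contexts
        inputs = torch.cat([trigger.expand(8, -1, -1), context], dim=1)
        out = encoder(inputs).mean(dim=1)              # pooled features
        loss = ((out - target) ** 2).mean()            # match the target vector
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach(), loss.item()

# Toy frozen encoder standing in for a pretrained language model.
enc = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=1)
trig, final_loss = invert_attack_vector(enc, embed_dim=32)
```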
no code implementations • 30 Jun 2023 • Guangtao Zhang, Yiting Duan, Guanyu Pan, Qijing Chen, Huiyu Yang, Zhikun Zhang
To comprehend complex systems with multiple states, it is imperative to reveal the identity of these states from the system's outputs.
1 code implementation • 13 Jun 2023 • Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang
Graph generative models are becoming increasingly effective for data distribution approximation and data augmentation.
1 code implementation • 10 May 2023 • Chengkun Wei, Minghu Zhao, Zhikun Zhang, Min Chen, Wenlong Meng, Bo Liu, Yuan Fan, Wenzhi Chen
We also explore some improvements that can maintain model utility and defend against MIAs more effectively.
2 code implementations • 5 Apr 2023 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Yang Zhang
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images during the model deployment phase.
1 code implementation • 4 Sep 2022 • Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang
Existing studies on neural architecture search (NAS) mainly focus on efficiently and effectively searching for network architectures with better performance.
no code implementations • 14 Apr 2022 • Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini
Previous security research efforts around graphs have focused exclusively on either (de-)anonymizing graphs or understanding the security and privacy issues of graph neural networks.
1 code implementation • 6 Oct 2021 • Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang
Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.
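The shape of such an inference attack can be illustrated with a binary classifier over paired embeddings; everything below is synthetic (correlated vectors stand in for real graph/subgraph embeddings), so it shows the attack's structure, not the paper's data or features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for (graph embedding, subgraph embedding) pairs.
# Positive pairs are correlated to mimic "subgraph is contained".
n, d = 1000, 64
g = rng.normal(size=(n, d))
contained = rng.integers(0, 2, size=n)
s = np.where(contained[:, None] == 1,
             0.7 * g + 0.3 * rng.normal(size=(n, d)),   # correlated
             rng.normal(size=(n, d)))                   # independent

X = np.concatenate([g, s, g * s], axis=1)   # simple pairwise features
clf = LogisticRegression(max_iter=1000).fit(X[:800], contained[:800])
print("attack accuracy:", clf.score(X[800:], contained[800:]))
```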
1 code implementation • 27 Mar 2021 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
In this paper, we propose GraphEraser, a novel machine unlearning framework tailored to graph data.
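GraphEraser builds on shard-based retraining: partition the data, train one model per shard, and retrain only the affected shard on deletion. The sketch below shows that underlying strategy only (the paper's contributions are graph-aware balanced partitioning and learned aggregation on top of it); `train_fn` is a user-supplied trainer returning a callable model.

```python
class ShardedUnlearner:
    """Sketch of shard-based unlearning: train one model per data
    partition; deleting a sample retrains only its shard. (Illustrates
    the strategy GraphEraser builds on, not its graph-aware variant.)"""

    def __init__(self, partitions, train_fn):
        self.partitions = [list(p) for p in partitions]
        self.train_fn = train_fn
        self.models = [train_fn(p) for p in self.partitions]

    def unlearn(self, sample):
        for i, part in enumerate(self.partitions):
            if sample in part:
                part.remove(sample)
                self.models[i] = self.train_fn(part)   # retrain one shard only

    def predict(self, x):
        votes = [m(x) for m in self.models]            # majority vote
        return max(set(votes), key=votes.count)
```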
1 code implementation • 4 Feb 2021 • Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationship among them, or the effectiveness of possible defenses.
no code implementations • 10 Sep 2020 • Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang
One major privacy attack in this domain is membership inference, where an adversary aims to determine whether a target data sample is part of the training set of a target ML model.
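The standard baseline version of this attack exploits the fact that models tend to be more confident on training samples; the sketch below shows that confidence-thresholding baseline with synthetic scores (it is the textbook attack, not this paper's specific method).

```python
import numpy as np

def confidence_mia(model_confidences, threshold=0.9):
    """Baseline membership inference: predict 'member' when the target
    model's confidence on its predicted class is high. (A standard
    baseline, not this paper's specific attack.)"""
    return model_confidences >= threshold

# Toy illustration with synthetic confidence scores.
members = np.random.beta(8, 2, 500)       # training samples: high confidence
non_members = np.random.beta(4, 4, 500)   # unseen samples: lower confidence
scores = np.concatenate([members, non_members])
truth = np.concatenate([np.ones(500), np.zeros(500)])
print(f"attack accuracy: {(confidence_mia(scores) == truth).mean():.2f}")
```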
1 code implementation • 24 May 2020 • Tianhao Wang, Joann Qiongna Chen, Zhikun Zhang, Dong Su, Yueqiang Cheng, Zhou Li, Ninghui Li, Somesh Jha
To our knowledge, this is the first LDP algorithm for publishing streaming data.
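To see why streaming makes this hard, consider the naive baseline below (my own sketch, not the paper's algorithm): splitting the privacy budget evenly across the stream under basic composition makes the per-release Laplace noise grow with stream length, which is precisely the problem a dedicated streaming LDP algorithm must overcome.

```python
import numpy as np

def ldp_stream_release(values, epsilon, lo=0.0, hi=1.0):
    """Naive baseline: split the budget evenly across the stream
    (basic composition) and add Laplace noise per release. (Shows why
    the problem is hard; the paper spends the budget far more cleverly.)"""
    values = np.clip(np.asarray(values, dtype=float), lo, hi)
    eps_per_step = epsilon / len(values)    # basic composition
    scale = (hi - lo) / eps_per_step        # sensitivity / per-step epsilon
    return values + np.random.laplace(0.0, scale, size=len(values))

stream = np.random.rand(100)                # toy per-user stream in [0, 1]
released = ldp_stream_release(stream, epsilon=1.0)
```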
1 code implementation • 5 May 2020 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy.
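The core intuition is that an adversary holding both model versions can compare their posteriors on a target sample; a large shift suggests the sample was deleted. The sketch below is a conceptual distillation with made-up numbers and a made-up threshold, not the paper's full attack features.

```python
import numpy as np

def deletion_inference(post_before, post_after, threshold=0.2):
    """Compare a target sample's posteriors before and after unlearning;
    a large shift suggests the sample was in the deleted set.
    (Conceptual sketch, not the paper's full attack.)"""
    shift = np.abs(np.asarray(post_before) - np.asarray(post_after)).sum()
    return shift > threshold

# Toy posteriors over 3 classes for one target sample.
before = [0.10, 0.85, 0.05]   # confident while the sample was in training
after = [0.30, 0.45, 0.25]    # less confident after deletion
print(deletion_inference(before, after))   # -> True
```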