no code implementations • 2 Feb 2025 • Jiawen Zhang, KeJia Chen, Lipeng He, Jian Lou, Dan Li, Zunlei Feng, Mingli Song, Jian Liu, Kui Ren, Xiaohu Yang
Large Language Models (LLMs) have showcased remarkable capabilities across various domains.
no code implementations • 2 Feb 2025 • Jiawen Zhang, KeJia Chen, Zunlei Feng, Jian Lou, Mingli Song, Jian Liu, Xiaohu Yang
With the growing popularity of LLMs among general public users, privacy preservation and adversarial robustness have become two pressing demands for LLM-based services; so far they have largely been pursued separately and rarely jointly.
no code implementations • 10 Dec 2024 • Zhenpeng Wu, Jian Lou, Zibin Zheng, Chuan Chen
Large language models (LLMs) have been shown to memorize and reproduce content from their training data, raising significant privacy concerns, especially with web-scale datasets.
no code implementations • 9 Oct 2024 • Junjie Chen, Qian Chen, Jian Lou, XiaoYu Zhang, Kai Wu, Zilong Wang
Machine unlearning (MU) is becoming a promising paradigm for achieving the "right to be forgotten", where the training trace of any chosen data points can be eliminated while maintaining model utility on general testing samples after unlearning.
no code implementations • CVPR 2024 • Wen Yin, Jian Lou, Pan Zhou, Yulai Xie, Dan Feng, Yuhua Sun, Tailai Zhang, Lichao Sun
In the digital realm, we evaluate our approach using benchmark datasets for TIOD, achieving an Attack Success Rate (ASR) of up to 98.21%.
no code implementations • 10 Feb 2024 • Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng
This implies that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space for different privacy settings and continuous training processes.
1 code implementation • 29 Jan 2024 • Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, Xiaofeng Meng
Federated learning (FL) enhanced by differential privacy has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process.
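To make the idea concrete, here is a minimal sketch assuming a generic client-level DP mechanism rather than this paper's specific algorithm: each client's update is norm-clipped to bound its contribution, and Gaussian noise is added before averaging. The function name `dp_aggregate` and all parameter values are illustrative.

```python
# Sketch only: protect each client's contribution by clipping its model update
# and adding Gaussian noise before the server averages the updates.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:                        # each update: 1-D parameter delta
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))  # bound sensitivity
    stacked = np.stack(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=stacked.shape[1])
    return stacked.mean(axis=0) + noise / len(client_updates)          # noisy average

# toy usage: three clients, each sending a 5-dimensional update
updates = [np.random.default_rng(i).normal(size=5) for i in range(3)]
print(dp_aggregate(updates))
```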
no code implementations • 19 Jan 2024 • Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong
Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model.
no code implementations • 24 Dec 2023 • Hanxi Liu, Xiaokai Mao, Haocheng Xia, Jian Lou, Jinfei Liu, Kui Ren
Large language models (LLMs) excel on new tasks without additional training, simply by providing natural language prompts that demonstrate how the task should be performed.
1 code implementation • 18 Dec 2023 • Lanlan Chen, Kai Wu, Jian Lou, Jing Liu
Modeling continuous-time dynamics constitutes a foundational challenge, and uncovering inter-component correlations within complex systems holds promise for enhancing the efficacy of dynamic modeling.
no code implementations • NeurIPS 2023 • Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren
In addition, our rates of generalization and deletion capacity match the state-of-the-art rates derived previously for standard statistical learning models.
no code implementations • 10 Nov 2023 • Fereshteh Razmi, Jian Lou, Li Xiong
We also explore the role of different components of DP algorithms in defending against backdoor attacks and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs.
no code implementations • 3 Nov 2023 • Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren
However, despite their promising efficiency, almost all existing machine unlearning methods handle unlearning requests independently from inference requests, which unfortunately introduces a new security issue of inference service obsolescence and a privacy vulnerability of undesirable exposure for machine unlearning in MLaaS.
1 code implementation • 19 Oct 2023 • Hongwei Yao, Jian Lou, Zhan Qin
Prompts have significantly improved the performance of pretrained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensable for a diverse range of LLM application scenarios.
1 code implementation • 23 Aug 2023 • Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren
After our DNN fingerprint removal attack, the model distance between the target and surrogate models is 100 times higher than that of the baseline attacks; (2) RemovalNet is efficient.
1 code implementation • 10 Aug 2023 • Yiling He, Jian Lou, Zhan Qin, Kui Ren
Although feature attribution (FA) methods can be used to explain deep learning, the underlying classifier is still blind to what behavior is suspicious, and the generated explanation cannot adapt to downstream tasks, incurring poor explanation fidelity and intelligibility.
no code implementations • 27 May 2023 • Kai Wu, Yujian Betterest Li, Jian Lou, XiaoYu Zhang, Handing Wang, Jing Liu
To address this challenge, this paper focuses on the Rapid Plug-in Defender (RaPiD) problem, aiming to rapidly counter adversarial perturbations without altering the deployed model.
no code implementations • 22 Mar 2023 • Wenjie Wang, Li Xiong, Jian Lou
In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time and utilize Wasserstein distance to bound the perturbation between normal examples and adversarial examples.
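As a rough illustration of the idea, the sketch below computes the 1-D Wasserstein-1 distance between a clean and a perturbed time series by treating the (normalized, non-negative) series as distributions over time steps; the paper's actual formulation of the Wasserstein perturbation ball may differ, and `wasserstein_1d` is a hypothetical helper.

```python
# Sketch: Wasserstein-1 distance between two time series viewed as normalized
# distributions over time steps (CDF-difference formula for 1-D distributions).
import numpy as np

def wasserstein_1d(x, y, eps=1e-12):
    p = np.abs(x) / (np.abs(x).sum() + eps)
    q = np.abs(y) / (np.abs(y).sum() + eps)
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()   # W1 on a unit-spaced grid

clean = np.sin(np.linspace(0, 2 * np.pi, 100))
adv = clean + 0.05 * np.random.default_rng(0).normal(size=100)
print(wasserstein_1d(clean, adv))   # perturbation size measured in Wasserstein space
```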
no code implementations • 4 Mar 2023 • Xinyi Shang, Gang Huang, Yang Lu, Jian Lou, Bo Han, Yiu-ming Cheung, Hanzi Wang
Federated Semi-Supervised Learning (FSSL) aims to learn a global model from different clients in an environment with both labeled and unlabeled data.
no code implementations • ICCV 2023 • Junxu Liu, Mingsheng Xue, Jian Lou, XiaoYu Zhang, Li Xiong, Zhan Qin
However, existing methods focus exclusively on unlearning from standard training models and do not apply to adversarial training models (ATMs) despite their popularity as effective defenses against adversarial examples.
1 code implementation • ICCV 2023 • Yulin Jin, XiaoYu Zhang, Jian Lou, Xu Ma, Zilong Wang, Xiaofeng Chen
The experimental evaluations manifest the superiority of SAT over other state-of-the-art AT mechanisms in defending against adversarial attacks on both the output and intermediate layers.
no code implementations • 3 Nov 2022 • Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong, Xiaoqian Jiang
PATE combines an ensemble of "teacher" models trained on sensitive data and transfers their knowledge to a "student" model through the noisy aggregation of the teachers' votes, which label the unlabeled public data on which the student model is then trained.
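The core of that aggregation step is the well-known noisy arg-max; the sketch below shows it in its simplest form, not the authors' implementation, with `noisy_aggregate` and the noise scale chosen purely for illustration.

```python
# PATE-style noisy vote aggregation: count teacher votes per class, add Laplace
# noise to the counts, and return the noisy majority as the student's label.
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / gamma, size=num_classes)   # DP noise on vote counts
    return int(np.argmax(counts))

votes = np.array([2, 2, 1, 2, 0, 2, 1, 2, 2, 1])   # 10 teachers voting over 3 classes
print(noisy_aggregate(votes, num_classes=3))        # noisy label used to train the student
```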
1 code implementation • 10 Oct 2022 • Qiuchen Zhang, Hong kyu Lee, Jing Ma, Jian Lou, Carl Yang, Li Xiong
The key idea is to decouple the feature projection and message passing via a DP PageRank algorithm which learns the structure information and uses the top-$K$ neighbors determined by the PageRank for feature aggregation.
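A minimal sketch of that decoupled design is given below, with the DP noise omitted and all names (`topk_ppr_aggregate`, the restart probability, K) purely illustrative: personalized PageRank scores are computed by power iteration, each node keeps its top-K neighbors, and features are aggregated only over those neighbors.

```python
# Sketch (DP noise omitted): personalized PageRank by power iteration, then
# feature aggregation over each node's top-K PageRank neighbors.
import numpy as np

def topk_ppr_aggregate(adj, feats, k=2, alpha=0.15, iters=50):
    n = adj.shape[0]
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    P = adj / deg                                   # row-stochastic transition matrix
    ppr = np.full((n, n), 1.0 / n)
    for _ in range(iters):                          # power iteration with restart
        ppr = alpha * np.eye(n) + (1 - alpha) * ppr @ P
    out = np.zeros_like(feats)
    for v in range(n):
        topk = np.argsort(-ppr[v])[:k]              # v's most relevant nodes
        w = ppr[v, topk] / ppr[v, topk].sum()
        out[v] = w @ feats[topk]                    # weighted feature aggregation
    return out

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
feats = np.arange(8, dtype=float).reshape(4, 2)     # toy node features
print(topk_ppr_aggregate(adj, feats))
```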
no code implementations • 1 Aug 2022 • Yifei Ren, Jian Lou, Li Xiong, Joyce C Ho, Xiaoqian Jiang, Sivasubramanium Bhavani
By supervising the tensor factorization with downstream prediction tasks and leveraging information from multiple related predictive tasks, MULTIPAR can yield not only more meaningful phenotypes but also better predictive performance for downstream tasks.
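The kind of joint objective this describes can be sketched as a CP reconstruction loss plus weighted logistic losses for several prediction tasks that share the patient factor; everything below (names, weights, tensor shapes) is a hypothetical illustration, not MULTIPAR itself.

```python
# Hypothetical joint objective: CP reconstruction error on a patients x diagnoses
# x medications tensor plus logistic losses for downstream tasks that share the
# patient factor U.
import numpy as np

def cp_reconstruct(U, V, W):
    return np.einsum('ir,jr,kr->ijk', U, V, W)            # rank-R CP tensor

def joint_loss(X, U, V, W, tasks, lambdas):
    recon = np.sum((X - cp_reconstruct(U, V, W)) ** 2)    # unsupervised part
    sup = 0.0
    for (beta, y), lam in zip(tasks, lambdas):             # each task: weights + labels
        p = 1.0 / (1.0 + np.exp(-U @ beta))                # predict from the patient factor
        sup += -lam * np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return recon + sup

rng = np.random.default_rng(0)
X = rng.random((20, 6, 5))                                 # toy EHR tensor
U, V, W = rng.random((20, 3)), rng.random((6, 3)), rng.random((5, 3))
tasks = [(rng.normal(size=3), rng.integers(0, 2, 20)) for _ in range(2)]
print(joint_loss(X, U, V, W, tasks, lambdas=[0.5, 0.5]))
```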
1 code implementation • 12 Jul 2022 • Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao Sun
In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA$^{-}$ and DMBA$^{+}$) to attack the model to produce arbitrarily large or small density estimations.
1 code implementation • 3 Mar 2022 • Yiu-ming Cheung, Juyong Jiang, Feng Yu, Jian Lou
Despite enormous research interest and rapid application of federated learning (FL) to various areas, existing studies mostly focus on supervised federated learning under the horizontally partitioned local dataset setting.
no code implementations • 8 Dec 2021 • Wenbo Gou, Wen Shi, Jian Lou, Lijie Huang, Pan Zhou, Ruixuan Li
Natural language video localization (NLVL) is an important task in the vision-language understanding area, which calls for an in-depth understanding not only of the computer vision and natural language sides individually, but more importantly of the interplay between the two.
no code implementations • 29 Sep 2021 • Pengfei Tang, Wenjie Wang, Xiaolan Gu, Jian Lou, Li Xiong, Ming Li
To address this challenge, a reconstruction network is placed in front of the public pre-trained classifiers to offer certified robustness and defend against adversarial examples through input perturbation.
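A minimal sketch of that arrangement, assuming a toy stand-in classifier and an invented `ReconstructionDefense` wrapper rather than the paper's actual architecture, is shown below: a small reconstruction front-end is trainable while the public classifier stays frozen, and Gaussian input perturbation is applied in the style of randomized smoothing.

```python
# Sketch: a small reconstruction network placed in front of a frozen, publicly
# pre-trained classifier, with random input perturbation applied at the front.
import torch
import torch.nn as nn

class ReconstructionDefense(nn.Module):
    def __init__(self, classifier, sigma=0.25):
        super().__init__()
        self.recon = nn.Sequential(                       # tiny denoising front-end
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        self.classifier = classifier.eval()
        for p in self.classifier.parameters():            # keep the public model frozen
            p.requires_grad_(False)
        self.sigma = sigma

    def forward(self, x):
        noisy = x + self.sigma * torch.randn_like(x)      # randomized input perturbation
        return self.classifier(self.recon(noisy))

toy_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model = ReconstructionDefense(toy_classifier)
print(model(torch.rand(4, 3, 32, 32)).shape)              # torch.Size([4, 10])
```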
no code implementations • 3 Sep 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani, Joyce C. Ho
Tensor factorization has proven to be an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where high-dimensional Electronic Health Records (EHRs), with patients' history of medical procedures, medications, diagnoses, lab tests, etc., are converted into meaningful and interpretable medical concepts.
no code implementations • 22 Aug 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Joyce C. Ho
Representation learning on static graph-structured data has shown a significant impact on many real-world applications.
no code implementations • 21 Aug 2021 • Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction while protecting local data privacy.
no code implementations • ICCV 2021 • Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
Adversarial data examples have drawn significant attention from the machine learning and security communities.
no code implementations • 18 Jul 2021 • Farnaz Tahmasebian, Jian Lou, Li Xiong
Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while preserving the privacy of local training datasets.
no code implementations • NAACL 2021 • Wenjie Wang, Pengfei Tang, Jian Lou, Li Xiong
The robustness and security of natural language processing (NLP) models are of critical importance in real-world applications.
no code implementations • 5 Mar 2021 • Yiming Li, Shan Liu, Yu Chen, Yushan Zheng, Sijia Chen, Bin Zhu, Jian Lou
As the successor of H.265/HEVC, the new versatile video coding standard (H.266/VVC) can provide up to 50% bitrate saving with the same subjective quality, at the cost of increased decoding complexity.
no code implementations • 16 Feb 2021 • Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao
Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.
no code implementations • 26 Aug 2019 • Jing Ma, Qiuchen Zhang, Jian Lou, Joyce C. Ho, Li Xiong, Xiaoqian Jiang
We propose DPFact, a privacy-preserving collaborative tensor factorization method for computational phenotyping using EHR.
no code implementations • 4 Dec 2018 • Wenwen Li, Jian Lou, Shuo Zhou, Haiping Lu
While functional magnetic resonance imaging (fMRI) is important for healthcare/neuroscience applications, it is challenging to classify or interpret due to its multi-dimensional structure, high dimensionality, and small number of samples available.
no code implementations • 28 May 2015 • Jian Lou, Andrew M. Smith, Yevgeniy Vorobeychik
Unlike most prior analysis, we focus on the situations in which each defender must protect multiple targets, so that even a single defender's best response decision is, in general, highly non-trivial.