Search Results for author: Jian Lou

Found 34 papers, 8 papers with code

Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off

no code implementations 10 Feb 2024 Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng

This implies that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space for different privacy settings and continuous training processes.

Federated Learning

Cross-silo Federated Learning with Record-level Personalized Differential Privacy

no code implementations 29 Jan 2024 Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, Xiaofeng Meng

Federated learning enhanced by differential privacy has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process.

Federated Learning

Contrastive Unlearning: A Contrastive Approach to Machine Unlearning

no code implementations 19 Jan 2024 Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong

Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model.

Machine Unlearning Representation Learning

Prompt Valuation Based on Shapley Values

no code implementations 24 Dec 2023 Hanxi Liu, Xiaokai Mao, Haocheng Xia, Jian Lou, Jinfei Liu

Large language models (LLMs) excel on new tasks without additional training, simply by providing natural language prompts that demonstrate how the task should be performed.
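As a sketch of the valuation idea named in the title, exact Shapley values over a small set of candidate prompt demonstrations follow directly from the standard formula; the utility function below is a hypothetical stand-in for whatever task score the paper actually uses.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value of each player (e.g., each candidate
    prompt demonstration) under a set-valued utility function."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        rest = [q for q in players if q != p]
        for r in range(n):
            for S in combinations(rest, r):
                # weight of a size-r coalition in the Shapley formula
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                # marginal contribution of p to coalition S
                phi[p] += w * (utility(set(S) | {p}) - utility(set(S)))
    return phi
```

Exact computation is exponential in the number of demonstrations, so any practical prompt-valuation scheme would rely on sampling-based approximation.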

Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-time Dynamics

1 code implementation 18 Dec 2023 Lanlan Chen, Kai Wu, Jian Lou, Jing Liu

Modeling continuous-time dynamics constitutes a foundational challenge, and uncovering inter-component correlations within complex systems holds promise for enhancing the efficacy of dynamic modeling.

Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

no code implementations NeurIPS 2023 Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

In addition, our rates of generalization and deletion capacity match the state-of-the-art rates derived previously for standard statistical learning models.

Machine Unlearning

Does Differential Privacy Prevent Backdoor Attacks in Practice?

no code implementations 10 Nov 2023 Fereshteh Razmi, Jian Lou, Li Xiong

We also explore the role of different components of DP algorithms in defending against backdoor attacks and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs.

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach

no code implementations 3 Nov 2023 Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren

However, despite their promising efficiency, almost all existing machine unlearning methods handle unlearning requests independently of inference requests, which unfortunately introduces a new security issue of inference-service obsolescence and a privacy vulnerability of undesirable exposure for machine unlearning in MLaaS.

Machine Unlearning

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

1 code implementation 19 Oct 2023 Hongwei Yao, Jian Lou, Zhan Qin

Prompts have significantly improved the performance of pretrained Large Language Models (LLMs) on various downstream tasks recently, making them increasingly indispensable for a diverse range of LLM application scenarios.

Backdoor Attack

RemovalNet: DNN Fingerprint Removal Attacks

1 code implementation 23 Aug 2023 Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren

After our DNN fingerprint removal attack, (1) the model distance between the target and surrogate models is 100x higher than that of the baseline attacks, and (2) RemovalNet is efficient.

Bilevel Optimization

FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis

1 code implementation 10 Aug 2023 Yiling He, Jian Lou, Zhan Qin, Kui Ren

Although feature attribution (FA) methods can be used to explain deep learning, the underlying classifier is still blind to what behavior is suspicious, and the generated explanation cannot adapt to downstream tasks, incurring poor explanation fidelity and intelligibility.

Malware Analysis Multi-Task Learning

Pre-trained transformer for adversarial purification

no code implementations 27 May 2023 Kai Wu, Yujian Betterest Li, Jian Lou, XiaoYu Zhang, Handing Wang, Jing Liu

Deep neural networks are alarmingly vulnerable and sensitive to adversarial attacks, the most common of which against deployed services are evasion-based.

Wasserstein Adversarial Examples on Univariant Time Series Data

no code implementations 22 Mar 2023 Wenjie Wang, Li Xiong, Jian Lou

In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time and utilize Wasserstein distance to bound the perturbation between normal examples and adversarial examples.

Adversarial Attack Time Series
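For intuition about the distance used to bound these perturbations: in the univariate case, the Wasserstein-1 distance between two series treated as normalized mass distributions over time steps has a closed form via cumulative sums. This is a minimal sketch only; the paper's actual threat model and attack procedure are not reproduced here.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two nonnegative univariate series,
    each normalized to a unit-mass distribution over time steps:
    the L1 norm of the difference of their cumulative sums."""
    a = np.asarray(a, float); a = a / a.sum()
    b = np.asarray(b, float); b = b / b.sum()
    return float(np.abs(np.cumsum(a - b)).sum())
```

For example, moving a unit of mass from time step 0 to time step 2 costs `wasserstein_1d([1, 0, 0], [0, 0, 1]) == 2.0`, reflecting the transport distance rather than a pointwise (L_p) difference.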

Federated Semi-Supervised Learning with Annotation Heterogeneity

no code implementations 4 Mar 2023 Xinyi Shang, Gang Huang, Yang Lu, Jian Lou, Bo Han, Yiu-ming Cheung, Hanzi Wang

Federated Semi-Supervised Learning (FSSL) aims to learn a global model from different clients in an environment with both labeled and unlabeled data.

Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective

1 code implementation ICCV 2023 Yulin Jin, XiaoYu Zhang, Jian Lou, Xu Ma, Zilong Wang, Xiaofeng Chen

The experimental evaluations manifest the superiority of SAT over other state-of-the-art AT mechanisms in defending against adversarial attacks against both output and intermediate layers.

Adversarial Attack Adversarial Robustness +1

MUter: Machine Unlearning on Adversarially Trained Models

no code implementations ICCV 2023 Junxu Liu, Mingsheng Xue, Jian Lou, XiaoYu Zhang, Li Xiong, Zhan Qin

However, existing methods focus exclusively on unlearning from standard training models and do not apply to adversarial training models (ATMs) despite their popularity as effective defenses against adversarial examples.

Machine Unlearning

Private Semi-supervised Knowledge Transfer for Deep Learning from Noisy Labels

no code implementations 3 Nov 2022 Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong, Xiaoqian Jiang

PATE combines an ensemble of "teacher models" trained on sensitive data and transfers their knowledge to a "student" model through the noisy aggregation of teachers' votes for labeling unlabeled public data, on which the student model is then trained.

Transfer Learning
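The noisy vote aggregation at the core of PATE can be sketched as follows: count the teachers' label votes, perturb each count with Laplace noise, and release only the arg-max label. This is a minimal illustration; the paper's exact mechanism, noise calibration, and privacy accounting are not shown.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, noise_scale=1.0, rng=None):
    """PATE-style noisy max: return the class whose (Laplace-noised)
    vote count among the teachers is highest.

    teacher_votes: 1-D integer array, one predicted label per teacher.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=noise_scale, size=num_classes)  # DP noise
    return int(np.argmax(counts))
```

When the teachers largely agree, the noise rarely changes the winning label, which is why strong consensus yields both accurate student labels and a small privacy cost.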

DPAR: Decoupled Graph Neural Networks with Node-Level Differential Privacy

1 code implementation 10 Oct 2022 Qiuchen Zhang, Hong kyu Lee, Jing Ma, Jian Lou, Carl Yang, Li Xiong

The key idea is to decouple the feature projection and message passing via a DP PageRank algorithm which learns the structure information and uses the top-$K$ neighbors determined by the PageRank for feature aggregation.
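The decoupled aggregation idea can be illustrated as follows: run personalized PageRank from each node and average the features of its top-K highest-scoring neighbors. This sketch deliberately omits the DP noise that the paper injects into the PageRank computation, and uses a dense adjacency matrix for clarity.

```python
import numpy as np

def topk_ppr_aggregate(adj, features, alpha=0.15, k=2, iters=50):
    """For each node, compute personalized PageRank by power iteration
    and average the features of its top-k scoring neighbors
    (excluding the node itself). adj: (n, n) adjacency; features: (n, d)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    P = adj / deg                           # row-stochastic transition matrix
    out = np.zeros_like(features, dtype=float)
    for u in range(n):
        r = np.zeros(n); r[u] = 1.0         # restart distribution at node u
        pi = r.copy()
        for _ in range(iters):
            pi = alpha * r + (1 - alpha) * pi @ P
        pi[u] = -np.inf                     # exclude self from the top-k
        topk = np.argsort(pi)[-k:]
        out[u] = features[topk].mean(axis=0)
    return out
```

Because the neighborhood is fixed by the (noised) PageRank scores rather than recomputed per layer, the feature aggregation step is decoupled from message passing, which is what makes node-level DP accounting tractable in the paper's setting.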

MULTIPAR: Supervised Irregular Tensor Factorization with Multi-task Learning

no code implementations 1 Aug 2022 Yifei Ren, Jian Lou, Li Xiong, Joyce C Ho, Xiaoqian Jiang, Sivasubramanium Bhavani

By supervising the tensor factorization with downstream prediction tasks and leveraging information from multiple related predictive tasks, MULTIPAR can yield not only more meaningful phenotypes but also better predictive performance for downstream tasks.

Mortality Prediction Multi-Task Learning +1

Backdoor Attacks on Crowd Counting

1 code implementation 12 Jul 2022 Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao

In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA$^{-}$ and DMBA$^{+}$) to attack the model to produce arbitrarily large or small density estimations.

Backdoor Attack Crowd Counting +3

Vertical Federated Principal Component Analysis and Its Kernel Extension on Feature-wise Distributed Data

1 code implementation 3 Mar 2022 Yiu-ming Cheung, Juyong Jiang, Feng Yu, Jian Lou

Despite enormous research interest and rapid application of federated learning (FL) to various areas, existing studies mostly focus on supervised federated learning under the horizontally partitioned local dataset setting.

Dimensionality Reduction Federated Learning

SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization

no code implementations 8 Dec 2021 Wenbo Gou, Wen Shi, Jian Lou, Lijie Huang, Pan Zhou, Ruixuan Li

Natural language video localization (NLVL) is an important task in the vision-language understanding area, which calls for an in-depth understanding of not only the computer vision and natural language sides alone but, more importantly, the interplay between them.

Adversarial Attack Adversarial Robustness

Two Birds, One Stone: Achieving both Differential Privacy and Certified Robustness for Pre-trained Classifiers via Input Perturbation

no code implementations 29 Sep 2021 Pengfei Tang, Wenjie Wang, Xiaolan Gu, Jian Lou, Li Xiong, Ming Li

To solve this challenge, a reconstruction network is built before the public pre-trained classifiers to offer certified robustness and defend against adversarial examples through input perturbation.

Image Classification

Communication Efficient Generalized Tensor Factorization for Decentralized Healthcare Networks

no code implementations 3 Sep 2021 Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani, Joyce C. Ho

Tensor factorization has proven to be an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where high-dimensional Electronic Health Records (EHRs) with patients' histories of medical procedures, medications, diagnoses, lab tests, etc. are converted to meaningful and interpretable medical concepts.

Computational Phenotyping

Temporal Network Embedding via Tensor Factorization

no code implementations 22 Aug 2021 Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Joyce C. Ho

Representation learning on static graph-structured data has shown a significant impact on many real-world applications.

Link Prediction Network Embedding +1

SemiFed: Semi-supervised Federated Learning with Consistency and Pseudo-Labeling

no code implementations 21 Aug 2021 Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi

Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction while protecting local data privacy.

Data Augmentation Federated Learning +1

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks

no code implementations ICCV 2021 Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi

Adversarial data examples have drawn significant attention from the machine learning and security communities.

Quantization

RobustFed: A Truth Inference Approach for Robust Federated Learning

no code implementations 18 Jul 2021 Farnaz Tahmasebian, Jian Lou, Li Xiong

Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while preserving the privacy of local training datasets.

Federated Learning

An Optimized H.266/VVC Software Decoder On Mobile Platform

no code implementations 5 Mar 2021 Yiming Li, Shan Liu, Yu Chen, Yushan Zheng, Sijia Chen, Bin Zhu, Jian Lou

As the successor of H.265/HEVC, the new versatile video coding standard (H.266/VVC) can provide up to 50% bitrate saving with the same subjective quality, at the cost of increased decoding complexity.

4k

Just Noticeable Difference for Deep Machine Vision

no code implementations 16 Feb 2021 Jian Jin, Xingxing Zhang, Xin Fu, huan zhang, Weisi Lin, Jian Lou, Yao Zhao

Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.

Image Classification Neural Network Security +1

Privacy-Preserving Tensor Factorization for Collaborative Health Data Analysis

no code implementations 26 Aug 2019 Jing Ma, Qiuchen Zhang, Jian Lou, Joyce C. Ho, Li Xiong, Xiaoqian Jiang

We propose DPFact, a privacy-preserving collaborative tensor factorization method for computational phenotyping using EHR.

Computational Phenotyping Privacy Preserving

Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI

no code implementations 4 Dec 2018 Wenwen Li, Jian Lou, Shuo Zhou, Haiping Lu

While functional magnetic resonance imaging (fMRI) is important for healthcare/neuroscience applications, it is challenging to classify or interpret due to its multi-dimensional structure, high dimensionality, and small number of samples available.

regression

Multidefender Security Games

no code implementations 28 May 2015 Jian Lou, Andrew M. Smith, Yevgeniy Vorobeychik

Unlike most prior analysis, we focus on the situations in which each defender must protect multiple targets, so that even a single defender's best response decision is, in general, highly non-trivial.
