Search Results for author: Li Xiong

Found 46 papers, 14 papers with code

DPDR: Gradient Decomposition and Reconstruction for Differentially Private Deep Learning

no code implementations 4 Jun 2024 Yixuan Liu, Li Xiong, YuHan Liu, Yujie Gu, Ruixuan Liu, Hong Chen

Third, the model is updated with the gradient reconstructed from recycled common knowledge and noisy incremental information.
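The reconstruction step quoted above can be sketched roughly as follows. This is an illustrative Python sketch only, not the paper's code: the function name, the projection-based decomposition, and the Gaussian noise parameter are assumptions about how "recycled common knowledge plus noisy incremental information" might be combined.

```python
import numpy as np

def reconstruct_gradient(g, g_common, sigma=1.0, rng=None):
    """Illustrative decomposition-and-reconstruction step: split gradient g
    into a component along a previously released common direction g_common
    (recycled without fresh noise) and an orthogonal incremental residual,
    which is privatized with Gaussian noise before reconstruction."""
    rng = np.random.default_rng(rng)
    u = g_common / np.linalg.norm(g_common)   # unit common direction
    g_para = np.dot(g, u) * u                 # recycled common part
    g_incr = g - g_para                       # incremental part
    noisy_incr = g_incr + rng.normal(0.0, sigma, size=g.shape)
    return g_para + noisy_incr                # reconstructed update
```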

Differentially Private Tabular Data Synthesis using Large Language Models

no code implementations 3 Jun 2024 Toan V. Tran, Li Xiong

This paper introduces DP-LLMTGen -- a novel framework for differentially private tabular data synthesis that leverages pretrained large language models (LLMs).

Fairness

HRNet: Differentially Private Hierarchical and Multi-Resolution Network for Human Mobility Data Synthesization

1 code implementation 13 May 2024 Shun Takagi, Li Xiong, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa

Human mobility data offers valuable insights for many applications such as urban planning and pandemic response, but its use also raises privacy concerns.

Multi-Task Learning

Cross-silo Federated Learning with Record-level Personalized Differential Privacy

no code implementations 29 Jan 2024 Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, Xiaofeng Meng

Federated learning enhanced by differential privacy has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process.

Federated Learning

Contrastive Unlearning: A Contrastive Approach to Machine Unlearning

no code implementations 19 Jan 2024 Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong

Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model.

Machine Unlearning Representation Learning

Does Differential Privacy Prevent Backdoor Attacks in Practice?

no code implementations 10 Nov 2023 Fereshteh Razmi, Jian Lou, Li Xiong

We also explore the role of different components of DP algorithms in defending against backdoor attacks and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs.

ULDP-FL: Federated Learning with Across Silo User-Level Differential Privacy

1 code implementation 23 Aug 2023 Fumiyuki Kato, Li Xiong, Shun Takagi, Yang Cao, Masatoshi Yoshikawa

In this study, we present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL where a single user's data may belong to multiple silos.

Federated Learning

Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model

no code implementations 11 Apr 2023 Yixuan Liu, Suyun Zhao, Li Xiong, YuHan Liu, Hong Chen

In this work, a general framework (APES) is built up to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.

Federated Learning

Wasserstein Adversarial Examples on Univariant Time Series Data

no code implementations 22 Mar 2023 Wenjie Wang, Li Xiong, Jian Lou

In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time and utilize Wasserstein distance to bound the perturbation between normal examples and adversarial examples.

Adversarial Attack Time Series
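The idea of bounding a time-series perturbation in Wasserstein distance, as described in the excerpt above, can be illustrated with the 1-D 1-Wasserstein distance between two mass sequences on the same time grid. This is a hedged sketch of the underlying metric only; the function name and normalization are assumptions and not the paper's formulation.

```python
import numpy as np

def w1_time_axis(p, q):
    """1-Wasserstein distance between two non-negative mass sequences on
    the same unit-spaced time grid: the area between their CDFs. Measures
    how far a perturbation moves mass along the time axis."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()              # normalize to distributions
    return float(np.abs(np.cumsum(p - q)).sum()) # area between the CDFs
```

For example, moving a unit spike from time step 0 to time step 2 has distance 2, whereas the L-infinity norm would report the same value regardless of how far the spike moves.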

MUter: Machine Unlearning on Adversarially Trained Models

no code implementations ICCV 2023 Junxu Liu, Mingsheng Xue, Jian Lou, XiaoYu Zhang, Li Xiong, Zhan Qin

However, existing methods focus exclusively on unlearning from standard training models and do not apply to adversarial training models (ATMs) despite their popularity as effective defenses against adversarial examples.

Machine Unlearning

Private Semi-supervised Knowledge Transfer for Deep Learning from Noisy Labels

no code implementations 3 Nov 2022 Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong, Xiaoqian Jiang

PATE combines an ensemble of "teacher models" trained on sensitive data and transfers their knowledge to a "student" model through the noisy aggregation of teachers' votes for labeling unlabeled public data, on which the student model is then trained.

Transfer Learning
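The noisy vote aggregation that PATE uses to label public data, as described above, can be sketched in a few lines. The Laplace-noise variant below is the standard PATE aggregator; the function name and parameter choices are illustrative.

```python
import numpy as np

def noisy_teacher_vote(teacher_preds, num_classes, eps=1.0, rng=None):
    """Label one public sample by noisy aggregation of teacher votes:
    count the teachers' predicted classes, add Laplace noise with scale
    1/eps to each count, and return the arg-max class."""
    rng = np.random.default_rng(rng)
    votes = np.bincount(teacher_preds, minlength=num_classes)
    noisy = votes + rng.laplace(0.0, 1.0 / eps, size=num_classes)
    return int(np.argmax(noisy))
```

When the teachers agree strongly, the noise rarely changes the outcome, which is why consensus labels leak little about any single teacher's training data.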

DPAR: Decoupled Graph Neural Networks with Node-Level Differential Privacy

1 code implementation 10 Oct 2022 Qiuchen Zhang, Hong kyu Lee, Jing Ma, Jian Lou, Carl Yang, Li Xiong

The key idea is to decouple the feature projection and message passing via a DP PageRank algorithm which learns the structure information and uses the top-$K$ neighbors determined by the PageRank for feature aggregation.
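The top-$K$ feature aggregation step described above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `ppr` is assumed to be an already-computed (possibly DP-perturbed) PageRank score matrix, and the simple mean aggregation is a placeholder for the paper's aggregation rule.

```python
import numpy as np

def topk_ppr_aggregate(ppr, X, k=2):
    """Decoupled feature aggregation: for each node, keep only its top-k
    neighbors by (DP-perturbed) PageRank score and average their features.
    ppr is an n x n score matrix, X an n x d feature matrix."""
    n = ppr.shape[0]
    out = np.zeros_like(X, dtype=float)
    for i in range(n):
        top = np.argsort(ppr[i])[::-1][:k]   # top-k neighbors by score
        out[i] = X[top].mean(axis=0)         # aggregate their features
    return out
```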

Federated Pruning: Improving Neural Network Efficiency with Federated Learning

no code implementations 14 Sep 2022 Rongmei Lin, Yonghui Xiao, Tien-Ju Yang, Ding Zhao, Li Xiong, Giovanni Motta, Françoise Beaufays

Automatic Speech Recognition models require a large amount of speech data for training, and the collection of such data often leads to privacy concerns.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

MULTIPAR: Supervised Irregular Tensor Factorization with Multi-task Learning

no code implementations 1 Aug 2022 Yifei Ren, Jian Lou, Li Xiong, Joyce C Ho, Xiaoqian Jiang, Sivasubramanium Bhavani

By supervising the tensor factorization with downstream prediction tasks and leveraging information from multiple related predictive tasks, MULTIPAR can yield not only more meaningful phenotypes but also better predictive performance for downstream tasks.

Mortality Prediction Multi-Task Learning +1

Multi-View Active Learning for Short Text Classification in User-Generated Data

no code implementations 5 Dec 2021 Payam Karisani, Negin Karisani, Li Xiong

Our model has three novelties: 1) It is the first approach to employ multi-view active learning in this domain.

Active Learning Language Modelling +2

PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy

no code implementations 22 Oct 2021 Xiaolan Gu, Ming Li, Li Xiong

In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.

Federated Learning Model Poisoning +1

Two Birds, One Stone: Achieving both Differential Privacy and Certified Robustness for Pre-trained Classifiers via Input Perturbation

no code implementations 29 Sep 2021 Pengfei Tang, Wenjie Wang, Xiaolan Gu, Jian Lou, Li Xiong, Ming Li

To solve this challenge, a reconstruction network is built before the public pre-trained classifiers to offer certified robustness and defend against adversarial examples through input perturbation.

Image Classification

Bit-aware Randomized Response for Local Differential Privacy in Federated Learning

no code implementations 29 Sep 2021 Phung Lai, Hai Phan, Li Xiong, Khang Phuc Tran, My Thai, Tong Sun, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios, Rajiv Jain

In this paper, we develop BitRand, a bit-aware randomized response algorithm, to preserve local differential privacy (LDP) in federated learning (FL).

Federated Learning Image Classification
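As background for the bit-aware scheme described above, classic randomized response applied independently to each bit of an encoded value looks like the sketch below. This is a simplified stand-in: BitRand additionally adapts the flipping probability to bit significance, which is omitted here, and the function name is illustrative.

```python
import numpy as np

def bitwise_rr(bits, eps, rng=None):
    """Per-bit randomized response: each bit is kept with probability
    e^eps / (1 + e^eps) and flipped otherwise, giving eps-LDP per bit."""
    rng = np.random.default_rng(rng)
    keep_p = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(len(bits)) >= keep_p         # True where we flip
    return np.where(flip, 1 - np.asarray(bits), bits)
```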

Communication Efficient Generalized Tensor Factorization for Decentralized Healthcare Networks

no code implementations 3 Sep 2021 Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani, Joyce C. Ho

Tensor factorization has proven to be an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where high-dimensional Electronic Health Records (EHRs) with patients' history of medical procedures, medications, diagnoses, lab tests, etc., are converted to meaningful and interpretable medical concepts.

Computational Phenotyping

Temporal Network Embedding via Tensor Factorization

no code implementations 22 Aug 2021 Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Joyce C. Ho

Representation learning on static graph-structured data has shown a significant impact on many real-world applications.

Link Prediction Network Embedding +1

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks

no code implementations ICCV 2021 Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi

Adversarial data examples have drawn significant attention from the machine learning and security communities.

Quantization

SemiFed: Semi-supervised Federated Learning with Consistency and Pseudo-Labeling

no code implementations 21 Aug 2021 Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi

Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction while protecting local data privacy.

Data Augmentation Federated Learning +1

Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks

1 code implementation 9 Aug 2021 Fereshteh Razmi, Li Xiong

Poisoning attacks are a category of adversarial machine learning threats in which an adversary attempts to subvert the outcome of a machine learning system by injecting crafted data into the training data set, thus increasing the model's test error.

BIG-bench Machine Learning Classification +1

RobustFed: A Truth Inference Approach for Robust Federated Learning

no code implementations 18 Jul 2021 Farnaz Tahmasebian, Jian Lou, Li Xiong

Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while preserving the privacy of their local training datasets.

Federated Learning

Federated Graph Classification over Non-IID Graphs

1 code implementation NeurIPS 2021 Han Xie, Jing Ma, Li Xiong, Carl Yang

Federated learning has emerged as an important paradigm for training machine learning models in different domains.

Clustering Dynamic Time Warping +4

PAM: Understanding Product Images in Cross Product Category Attribute Extraction

no code implementations 8 Jun 2021 Rongmei Lin, Xiang He, Jie Feng, Nasser Zalmout, Yan Liang, Li Xiong, Xin Luna Dong

Understanding product attributes plays an important role in improving online shopping experience for customers and serves as an integral part for constructing a product knowledge graph.

Attribute Attribute Extraction +4

View Distillation with Unlabeled Data for Extracting Adverse Drug Effects from User-Generated Data

no code implementations NAACL (SMM4H) 2021 Payam Karisani, Jinho D. Choi, Li Xiong

Then a classifier is trained on each view to label a set of unlabeled documents to be used as an initializer for a new classifier in the other view.

Word Embeddings

Learning with Hyperspherical Uniformity

1 code implementation 2 Mar 2021 Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller

Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation.

Inductive Bias L2 Regularization

Transparent Contribution Evaluation for Secure Federated Learning on Blockchain

no code implementations 26 Jan 2021 Shuaicheng Ma, Yang Cao, Li Xiong

In this work, we propose a blockchain-based federated learning framework and a protocol to transparently evaluate each participant's contribution.

BIG-bench Machine Learning Federated Learning

Generative Fairness Teaching

no code implementations 1 Jan 2021 Rongmei Lin, Hanjun Dai, Li Xiong, Wei Wei

We propose a generative fairness teaching framework that provides a model with not only real samples but also synthesized samples to compensate for data biases during training.

Fairness

Spatio-Temporal Tensor Sketching via Adaptive Sampling

no code implementations 21 Jun 2020 Jing Ma, Qiuchen Zhang, Joyce C. Ho, Li Xiong

In this paper, we propose SkeTenSmooth, a novel tensor factorization framework that uses adaptive sampling to compress the tensor in a temporally streaming fashion and preserves the underlying global structure.

Management

PGLP: Customizable and Rigorous Location Privacy through Policy Graph

3 code implementations 4 May 2020 Yang Cao, Yonghui Xiao, Shun Takagi, Li Xiong, Masatoshi Yoshikawa, Yilin Shen, Jinfei Liu, Hongxia Jin, Xiaofeng Xu

Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.

Cryptography and Security Computers and Society

PANDA: Policy-aware Location Privacy for Epidemic Surveillance

3 code implementations 1 May 2020 Yang Cao, Shun Takagi, Yonghui Xiao, Li Xiong, Masatoshi Yoshikawa

Our system has three primary functions for epidemic surveillance: location monitoring, epidemic analysis, and contact tracing.

Databases Cryptography and Security

Privacy-Preserving Tensor Factorization for Collaborative Health Data Analysis

no code implementations 26 Aug 2019 Jing Ma, Qiuchen Zhang, Jian Lou, Joyce C. Ho, Li Xiong, Xiaoqian Jiang

We propose DPFact, a privacy-preserving collaborative tensor factorization method for computational phenotyping using EHR.

Computational Phenotyping Privacy Preserving

Regularizing Neural Networks via Minimizing Hyperspherical Energy

1 code implementation CVPR 2020 Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song

Inspired by the Thomson problem in physics where the distribution of multiple propelling electrons on a unit sphere can be modeled via minimizing some potential energy, hyperspherical energy minimization has demonstrated its potential in regularizing neural networks and improving their generalization power.
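The Thomson-problem analogy above can be made concrete with a tiny sketch of hyperspherical (Riesz) energy over a set of weight vectors. This is illustrative only; the function name and the power parameter `s` are assumptions, and the double loop is kept for clarity rather than efficiency.

```python
import numpy as np

def hyperspherical_energy(W, s=1.0):
    """Riesz s-energy of the rows of W: project each weight vector onto
    the unit sphere and sum inverse pairwise distances. Lower energy
    means the directions are spread more uniformly over the sphere."""
    U = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit directions
    n = U.shape[0]
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            energy += 1.0 / np.linalg.norm(U[i] - U[j]) ** s
    return energy
```

Two antipodal directions in 2-D (distance 2 on the circle) give energy 0.5, while orthogonal directions (distance sqrt(2)) give a higher energy, matching the intuition that the regularizer pushes neurons apart.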

Visually-aware Recommendation with Aesthetic Features

no code implementations 2 May 2019 Wenhui Yu, Xiangnan He, Jian Pei, Xu Chen, Li Xiong, Jinfei Liu, Zheng Qin

While recent developments on visually-aware recommender systems have taken the product image into account, none of them has considered the aesthetic aspect.

Decision Making Recommendation Systems +1

Aesthetic-based Clothing Recommendation

no code implementations 16 Sep 2018 Wenhui Yu, Huidi Zhang, Xiangnan He, Xu Chen, Li Xiong, Zheng Qin

Considering that the aesthetic preference varies significantly from user to user and by time, we then propose a new tensor factorization model to incorporate the aesthetic features in a personalized manner.

Recommendation Systems

Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations

2 code implementations 29 Nov 2017 Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong

Our analysis reveals that the event-level privacy loss of a DP mechanism may increase over time.

Databases

Quantifying Differential Privacy under Temporal Correlations

2 code implementations 24 Oct 2016 Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong

Our analysis reveals that the privacy leakage of a DP mechanism may accumulate and increase over time.

Databases Cryptography and Security
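The accumulation effect noted in the two excerpts above is easiest to see under naive sequential composition, where worst-case privacy loss simply adds up across releases. The sketch below shows only this baseline; the papers' contribution is a tighter, correlation-aware analysis that this toy function does not capture.

```python
def cumulative_epsilon(per_release_eps):
    """Naive sequential composition for a continuous data release:
    publishing a DP output at every time step makes the worst-case
    cumulative privacy loss the running sum of per-release epsilons."""
    total, trajectory = 0.0, []
    for eps in per_release_eps:
        total += eps
        trajectory.append(total)
    return trajectory
```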
