no code implementations • EMNLP (ClinicalNLP) 2020 • Wenjie Wang, Youngja Park, Taesung Lee, Ian Molloy, Pengfei Tang, Li Xiong
Among the modalities of medical data, clinical summaries are at higher risk of attack because they are generated by third-party companies.
no code implementations • 29 Jan 2024 • Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, Xiaofeng Meng
Federated learning enhanced by differential privacy has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process.
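A common way to protect clients' contributions in DP federated learning is to clip each client's model update and add calibrated Gaussian noise before aggregation. The sketch below is a generic illustration of that mechanism, not the specific protocol proposed in this paper; the parameter names (`clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update to a fixed L2 norm, then add
    Gaussian noise proportional to the clipping bound (DP-SGD style)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# The server averages the privatized client updates.
updates = [np.ones(4) * 5.0, -np.ones(4) * 3.0]
avg = np.mean([privatize_update(u) for u in updates], axis=0)
```

Clipping bounds any single client's influence on the average, which is what makes the added noise sufficient for a formal DP guarantee.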
no code implementations • 19 Jan 2024 • Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong
Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model.
no code implementations • 10 Nov 2023 • Fereshteh Razmi, Jian Lou, Li Xiong
We also explore the role of different components of DP algorithms in defending against backdoor attacks and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs.
1 code implementation • 23 Aug 2023 • Fumiyuki Kato, Li Xiong, Shun Takagi, Yang Cao, Masatoshi Yoshikawa
In this study, we present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL where a single user's data may belong to multiple silos.
no code implementations • 11 Apr 2023 • Yixuan Liu, Suyun Zhao, Li Xiong, YuHan Liu, Hong Chen
In this work, a general framework (APES) is built to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
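The shuffle model's basic pipeline can be illustrated with a minimal sketch: each client locally randomizes its report (here, binary randomized response, a standard LDP primitive and not necessarily the paper's mechanism), and a shuffler permutes the reports so the server cannot link a report to its sender; this anonymity is the source of the amplification effect.

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """Binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), else flip it (eps-LDP)."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p else 1 - bit

def shuffle_reports(reports, rng):
    """The shuffler uniformly permutes the reports, breaking the
    client-to-report linkage before they reach the curator."""
    return list(rng.permutation(reports))

rng = np.random.default_rng(0)
bits = [1] * 100
reports = [randomized_response(b, 10.0, rng) for b in bits]
shuffled = shuffle_reports(reports, rng)
```

The multiset of reports (and hence any aggregate statistic) is unchanged by shuffling; only the linkage is destroyed.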
no code implementations • 22 Mar 2023 • Wenjie Wang, Li Xiong, Jian Lou
In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time and utilize Wasserstein distance to bound the perturbation between normal examples and adversarial examples.
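For equal-length 1-D series viewed as empirical distributions, the Wasserstein-1 distance reduces to the mean absolute difference of the sorted samples. The sketch below shows only this distance computation and a budget check, not the paper's attack-generation procedure; the function names are illustrative.

```python
import numpy as np

def w1_distance(x, y):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: average absolute difference of sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

def within_wasserstein_ball(x, x_adv, eps):
    """Check that an adversarial series stays within an eps-Wasserstein
    ball of the original series."""
    return w1_distance(x, x_adv) <= eps

x = np.sin(np.linspace(0, 2 * np.pi, 100))
x_adv = x + 0.01  # a small constant value shift
```

Note how a constant shift of 0.01 yields a Wasserstein distance of exactly 0.01, even though the L0 "number of changed points" is the full series length; this is why Wasserstein balls suit time series perturbations.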
no code implementations • ICCV 2023 • Junxu Liu, Mingsheng Xue, Jian Lou, XiaoYu Zhang, Li Xiong, Zhan Qin
However, existing methods focus exclusively on unlearning from standard training models and do not apply to adversarial training models (ATMs) despite their popularity as effective defenses against adversarial examples.
no code implementations • 3 Nov 2022 • Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong, Xiaoqian Jiang
PATE combines an ensemble of "teacher" models trained on sensitive data and transfers their knowledge to a "student" model through the noisy aggregation of the teachers' votes when labeling the unlabeled public data on which the student model is then trained.
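The noisy aggregation step of PATE can be sketched as a Laplace "noisy max" over teacher vote counts; this follows the standard PATE aggregator, with `gamma` (the inverse noise scale) as an illustrative parameter name.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma=0.1, rng=None):
    """PATE-style noisy max: count the teachers' votes per class, add
    Laplace noise of scale 1/gamma to each count, return the winner."""
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# 10 teachers vote on one unlabeled public example (3 classes).
votes = np.array([2, 2, 2, 2, 2, 2, 2, 1, 0, 2])
label = noisy_aggregate(votes, num_classes=3)
```

Because the student only ever sees these noisy labels, its training data carries a DP guarantee with respect to the teachers' sensitive data.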
1 code implementation • 10 Oct 2022 • Qiuchen Zhang, Hong kyu Lee, Jing Ma, Jian Lou, Carl Yang, Li Xiong
The key idea is to decouple the feature projection and message passing via a DP PageRank algorithm which learns the structure information and uses the top-$K$ neighbors determined by the PageRank for feature aggregation.
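The top-$K$ aggregation step described above can be sketched as follows: given (possibly DP-noised) PageRank scores, each node aggregates features from its $K$ highest-ranked neighbors, weighted by score. This is a simplified illustration of the decoupling idea, not the full DP algorithm.

```python
import numpy as np

def topk_aggregate(features, ppr_scores, k):
    """For each node, select the top-K neighbors by (approximate,
    possibly noised) personalized PageRank score and aggregate their
    features with score-proportional weights."""
    n = features.shape[0]
    out = np.zeros_like(features, dtype=float)
    for v in range(n):
        topk = np.argsort(-ppr_scores[v])[:k]   # K highest-scoring neighbors
        w = ppr_scores[v, topk]
        out[v] = w @ features[topk] / max(w.sum(), 1e-12)
    return out

features = np.eye(3)                     # one-hot node features
ppr = np.array([[0.5, 0.3, 0.2],
                [0.2, 0.5, 0.3],
                [0.3, 0.2, 0.5]])        # toy PageRank score matrix
agg = topk_aggregate(features, ppr, k=2)
```

Decoupling matters for privacy accounting: once the PageRank scores are computed with DP, the subsequent feature aggregation touches the graph structure only through those released scores.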
no code implementations • 14 Sep 2022 • Rongmei Lin, Yonghui Xiao, Tien-Ju Yang, Ding Zhao, Li Xiong, Giovanni Motta, Françoise Beaufays
Automatic Speech Recognition models require large amounts of speech data for training, and the collection of such data often raises privacy concerns.
no code implementations • 1 Aug 2022 • Yifei Ren, Jian Lou, Li Xiong, Joyce C Ho, Xiaoqian Jiang, Sivasubramanium Bhavani
By supervising the tensor factorization with downstream prediction tasks and leveraging information from multiple related predictive tasks, MULTIPAR can yield not only more meaningful phenotypes but also better predictive performance for downstream tasks.
no code implementations • 5 Dec 2021 • Payam Karisani, Negin Karisani, Li Xiong
Our model has three novelties: 1) It is the first approach to employ multi-view active learning in this domain.
2 code implementations • VLDB 2022 • Junxu Liu, Li Xiong, Jinfei Liu, Xiaofeng Meng
The challenge is how to use such information without biasing the joint model.
no code implementations • 22 Oct 2021 • Xiaolan Gu, Ming Li, Li Xiong
In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
no code implementations • 29 Sep 2021 • Pengfei Tang, Wenjie Wang, Xiaolan Gu, Jian Lou, Li Xiong, Ming Li
To address this challenge, a reconstruction network is placed before the public pre-trained classifiers to offer certified robustness and defend against adversarial examples through input perturbation.
no code implementations • 29 Sep 2021 • Phung Lai, Hai Phan, Li Xiong, Khang Phuc Tran, My Thai, Tong Sun, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios, Rajiv Jain
In this paper, we develop BitRand, a bit-aware randomized response algorithm, to preserve local differential privacy (LDP) in federated learning (FL).
no code implementations • 3 Sep 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani, Joyce C. Ho
Tensor factorization has proven to be an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where high-dimensional Electronic Health Records (EHRs) containing patients' histories of medical procedures, medications, diagnoses, lab tests, etc., are converted into meaningful and interpretable medical concepts.
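The phenotyping pipeline described above typically rests on a CP (PARAFAC) decomposition, where each rank-one component corresponds to a candidate phenotype. A minimal alternating-least-squares sketch (plain CP, without the paper's privacy or supervision machinery) looks like this:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, rng=None):
    """Minimal CP decomposition of a 3-way tensor via alternating least
    squares; each column triple (A[:,r], B[:,r], C[:,r]) is one latent
    concept (e.g., a phenotype over patients x diagnoses x medications)."""
    rng = rng or np.random.default_rng(0)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # Solve each factor in turn against the matching tensor unfolding.
        A = T.reshape(I, -1) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = np.transpose(T, (1, 0, 2)).reshape(J, -1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = np.transpose(T, (2, 0, 1)).reshape(K, -1) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

This sketch assumes a dense tensor small enough for pseudoinverses; real EHR tensors are sparse and need specialized solvers.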
no code implementations • 22 Aug 2021 • Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Joyce C. Ho
Representation learning on static graph-structured data has shown a significant impact on many real-world applications.
no code implementations • 21 Aug 2021 • Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction while protecting local data privacy.
no code implementations • ICCV 2021 • Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
Adversarial data examples have drawn significant attention from the machine learning and security communities.
1 code implementation • 9 Aug 2021 • Fereshteh Razmi, Li Xiong
Poisoning attacks are a category of adversarial machine learning threats in which an adversary attempts to subvert the outcome of a machine learning system by injecting crafted data into the training data set, thus increasing the model's test error.
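The effect is easy to demonstrate with a toy label-flipping attack against a 1-nearest-neighbor classifier on 1-D data (a generic illustration, unrelated to the defense this paper proposes): flipping the label of a single training point near the decision boundary raises the test error.

```python
import numpy as np

def nn_error(train_X, train_y, test_X, test_y):
    """Test error of a 1-nearest-neighbor classifier on 1-D inputs."""
    d = np.abs(train_X[:, None] - test_X[None, :])
    pred = train_y[np.argmin(d, axis=0)]
    return float(np.mean(pred != test_y))

train_X = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
train_y = np.array([0, 0, 0, 1, 1, 1])
test_X = np.array([0.15, 0.28, 0.75, 0.85])
test_y = np.array([0, 0, 1, 1])

clean_err = nn_error(train_X, train_y, test_X, test_y)

poisoned_y = train_y.copy()
poisoned_y[2] = 1  # adversary flips the label of the boundary point 0.3
poisoned_err = nn_error(train_X, poisoned_y, test_X, test_y)
```

Nearest-neighbor models make the damage local and visible; in deep models the same injected points shift the learned decision boundary more diffusely.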
no code implementations • 18 Jul 2021 • Farnaz Tahmasebian, Jian Lou, Li Xiong
Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while keeping local training datasets private.
1 code implementation • NeurIPS 2021 • Han Xie, Jing Ma, Li Xiong, Carl Yang
Federated learning has emerged as an important paradigm for training machine learning models in different domains.
no code implementations • 8 Jun 2021 • Rongmei Lin, Xiang He, Jie Feng, Nasser Zalmout, Yan Liang, Li Xiong, Xin Luna Dong
Understanding product attributes plays an important role in improving online shopping experience for customers and serves as an integral part for constructing a product knowledge graph.
no code implementations • NAACL 2021 • Wenjie Wang, Pengfei Tang, Jian Lou, Li Xiong
The robustness and security of natural language processing (NLP) models are critically important in real-world applications.
no code implementations • NAACL (SMM4H) 2021 • Payam Karisani, Jinho D. Choi, Li Xiong
Then a classifier is trained on each view to label a set of unlabeled documents to be used as an initializer for a new classifier in the other view.
no code implementations • 31 Mar 2021 • Mani Sotoodeh, Li Xiong, Joyce C. Ho
Samples with ground truth labels may not always be available in numerous domains.
1 code implementation • 2 Mar 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller
Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation.
no code implementations • 28 Jan 2021 • Salman Seyedi, Li Xiong, Shamim Nemati, Gari D. Clifford
Modern deep learning algorithms carry the greatest risk of such leakage due to the complexity of the models.
no code implementations • 26 Jan 2021 • Shuaicheng Ma, Yang Cao, Li Xiong
In this work, we propose a blockchain-based federated learning framework and a protocol to transparently evaluate each participant's contribution.
no code implementations • 1 Jan 2021 • Rongmei Lin, Hanjun Dai, Li Xiong, Wei Wei
We propose a generative fairness teaching framework that provides a model with not only real samples but also synthesized samples to compensate for data biases during training.
no code implementations • 21 Jun 2020 • Jing Ma, Qiuchen Zhang, Joyce C. Ho, Li Xiong
In this paper, we propose SkeTenSmooth, a novel tensor factorization framework that uses adaptive sampling to compress the tensor in a temporally streaming fashion and preserves the underlying global structure.
3 code implementations • 4 May 2020 • Yang Cao, Yonghui Xiao, Shun Takagi, Li Xiong, Masatoshi Yoshikawa, Yilin Shen, Jinfei Liu, Hongxia Jin, Xiaofeng Xu
Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
3 code implementations • 1 May 2020 • Yang Cao, Shun Takagi, Yonghui Xiao, Li Xiong, Masatoshi Yoshikawa
Our system has three primary functions for epidemic surveillance: location monitoring, epidemic analysis, and contact tracing.
1 code implementation • CVPR 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, James M. Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller
The inductive bias of a neural network is largely determined by the architecture and the training algorithm.
8 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • 26 Aug 2019 • Jing Ma, Qiuchen Zhang, Jian Lou, Joyce C. Ho, Li Xiong, Xiaoqian Jiang
We propose DPFact, a privacy-preserving collaborative tensor factorization method for computational phenotyping using EHR.
1 code implementation • CVPR 2020 • Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song
Inspired by the Thomson problem in physics, where the distribution of multiple propelling electrons on a unit sphere can be modeled via minimizing some potential energy, hyperspherical energy minimization has demonstrated its potential in regularizing neural networks and improving their generalization power.
no code implementations • 2 May 2019 • Wenhui Yu, Xiangnan He, Jian Pei, Xu Chen, Li Xiong, Jinfei Liu, Zheng Qin
While recent developments on visually-aware recommender systems have taken the product image into account, none of them has considered the aesthetic aspect.
no code implementations • 16 Sep 2018 • Wenhui Yu, Huidi Zhang, Xiangnan He, Xu Chen, Li Xiong, Zheng Qin
Considering that the aesthetic preference varies significantly from user to user and by time, we then propose a new tensor factorization model to incorporate the aesthetic features in a personalized manner.
2 code implementations • 29 Nov 2017 • Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong
Our analysis reveals that the event-level privacy loss of a DP mechanism may increase over time.
2 code implementations • 24 Oct 2016 • Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong
Our analysis reveals that the privacy leakage of a DP mechanism may accumulate and increase over time.
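The baseline intuition for this accumulation is basic sequential composition: releasing a DP statistic at every timestamp adds the per-release privacy losses together. The sketch below shows only this baseline; the paper's analysis under temporal correlations is more refined.

```python
def cumulative_epsilon(per_release_eps):
    """Basic sequential composition: the total privacy loss of repeated
    DP releases over a data stream is the running sum of each release's
    epsilon, so leakage grows monotonically over time."""
    total, trajectory = 0.0, []
    for eps in per_release_eps:
        total += eps
        trajectory.append(total)
    return trajectory

# Releasing a statistic with eps = 0.1 at each of 10 timestamps.
traj = cumulative_epsilon([0.1] * 10)
```

Under temporal correlations the effective loss at a timestamp can exceed even this sum, which is the phenomenon the paper quantifies.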