Search Results for author: Liyao Xiang

Found 11 papers, 1 paper with code

Privacy Threats Analysis to Secure Federated Learning

no code implementations24 Jun 2021 Yuchen Li, Yifan Bao, Liyao Xiang, Junhan Liu, Cen Chen, Li Wang, Xinbing Wang

Federated learning is an emerging machine learning technique that trains a model across multiple decentralized parties.
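As background for this abstract, a minimal sketch of federated averaging (FedAvg), the canonical federated training loop, is shown below. This is a generic illustration on a toy linear model, not the paper's own protocol or threat model; all function names here are hypothetical.

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1):
    """One local gradient-descent step on a toy linear model (illustrative)."""
    preds = data @ weights
    grad = data.T @ (preds - targets) / len(data)
    return weights - lr * grad

def fed_avg(global_weights, party_datasets, rounds=5):
    """Each round: parties train locally, the server averages their weights."""
    w = global_weights
    for _ in range(rounds):
        local_models = [local_update(w, X, y) for X, y in party_datasets]
        w = np.mean(local_models, axis=0)  # server-side aggregation
    return w
```

Because only model weights (not raw data) are exchanged, the privacy question the paper studies is what those shared updates still leak about each party's data.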

Federated Learning

Improved Matrix Gaussian Mechanism for Differential Privacy

no code implementations30 Apr 2021 Jungang Yang, Liyao Xiang, Weiting Li, Wei Liu, Xinbing Wang

The wide deployment of machine learning in recent years gives rise to a great demand for large-scale and high-dimensional data, for which privacy raises serious concerns.
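For context, the classic (non-matrix) Gaussian mechanism that this paper improves upon calibrates additive Gaussian noise to a query's L2 sensitivity. The sketch below shows only that standard baseline calibration; the paper's matrix variant, which shapes the noise covariance, is not reproduced here.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Standard Gaussian mechanism: add N(0, sigma^2) noise with sigma
    calibrated to the L2 sensitivity for (epsilon, delta)-differential privacy."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))
```

For matrix-valued queries, applying this element-wise is often overly conservative, which is the gap matrix mechanisms target.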

Certified Distributional Robustness via Smoothed Classifiers

no code implementations1 Jan 2021 Jungang Yang, Liyao Xiang, Ruidong Chen, Yukun Wang, Wei Wang, Xinbing Wang

We focus on certified robustness of smoothed classifiers in this work, and propose to use the worst-case population loss over noisy inputs as a robustness metric.
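A smoothed classifier of the kind this abstract refers to predicts by majority vote over Gaussian perturbations of the input. The sketch below is a generic randomized-smoothing predictor, not the paper's certification procedure; the function and parameter names are hypothetical.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Smoothed classifier g(x): majority vote of the base classifier's
    predictions over Gaussian-perturbed copies of the input x."""
    rng = rng or np.random.default_rng()
    noisy = x + rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([base_classifier(z) for z in noisy])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]
```

Certification then bounds how far an input can move before the majority vote can flip; the paper's contribution is using a worst-case population loss over noisy inputs as that robustness metric.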

Certified Distributional Robustness on Smoothed Classifiers

no code implementations21 Oct 2020 Jungang Yang, Liyao Xiang, Ruidong Chen, Yukun Wang, Wei Wang, Xinbing Wang

For smoothed classifiers, we propose the worst-case adversarial loss over input distributions as a robustness certificate.

High-Order Relation Construction and Mining for Graph Matching

no code implementations9 Oct 2020 Hui Xu, Liyao Xiang, Youmin Le, Xiaoying Gan, Yuting Jia, Luoyi Fu, Xinbing Wang

Iterated line graphs are introduced for the first time to describe such high-order information, based on which we present a new graph matching method, called High-order Graph Matching Network (HGMN), to learn not only the local structural correspondence, but also the hyperedge relations across graphs.
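The iterated line graph mentioned above is the line-graph construction applied repeatedly: each edge becomes a node, and new nodes are connected when the original edges share an endpoint. A minimal stdlib sketch of that construction (not HGMN itself) follows; the helper names are hypothetical.

```python
from itertools import combinations

def line_graph(edges):
    """Line graph: each edge becomes a node; two nodes are adjacent iff
    the corresponding original edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    return [(e1, e2) for e1, e2 in combinations(edges, 2)
            if set(e1) & set(e2)]

def iterated_line_graph(edges, order=2):
    """Apply the construction `order` times; nodes of the k-th iterate
    correspond to progressively higher-order structures in the graph."""
    for _ in range(order):
        edges = line_graph(edges)
    return edges
```

For the path 0-1-2-3, one iteration yields a 3-node path over its edges, and a second iteration collapses it further, illustrating how iterates encode higher-order adjacency.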

Graph Matching

Achieving Adversarial Robustness via Sparsity

no code implementations11 Sep 2020 Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang

Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness: both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.
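The sparsity discussed here is typically obtained by magnitude pruning, which zeroes the smallest-magnitude weights and keeps a binary mask (as in lottery-ticket experiments). A generic sketch, not the paper's specific pruning schedule:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights;
    return the pruned weights and the binary keep-mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In lottery-ticket-style training, the surviving weights are then reset to (or inherited from) their early-training values and retrained under the fixed mask.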

Adversarial Robustness, Network Pruning

Rotation-Equivariant Neural Networks for Privacy Protection

no code implementations21 Jun 2020 Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang

Compared with a traditional neural network, the rotation-equivariant neural network (RENN) uses d-ary vectors/tensors as features, in which each element is a d-ary number.

Deep Quaternion Features for Privacy Protection

no code implementations18 Mar 2020 Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang

We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
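The quaternion idea underlying this line of work is that a feature can be embedded as a pure quaternion and rotated by a secret unit quaternion, so intermediate values no longer directly expose the input. The sketch below only illustrates quaternion rotation via the Hamilton product; it is not the paper's QNN layer design, and both function names are hypothetical.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_feature(feature3d, unit_q):
    """Rotate a 3-D feature as q v q*, with v embedded as a pure quaternion."""
    v = np.concatenate(([0.0], feature3d))
    q_conj = unit_q * np.array([1.0, -1.0, -1.0, -1.0])
    return hamilton(hamilton(unit_q, v), q_conj)[1:]
```

Rotation preserves the feature's norm while changing its direction, which is why a party without the secret rotation cannot easily invert the feature back to the input.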

Privacy Preserving

Learning to Prevent Leakage: Privacy-Preserving Inference in the Mobile Cloud

no code implementations18 Dec 2019 Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li

Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.

Neural Architecture Search, Privacy Preserving +1

Interpretable Complex-Valued Neural Networks for Privacy Protection

1 code implementation ICLR 2020 Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang

Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features.
