no code implementations • 24 Jun 2021 • Yuchen Li, Yifan Bao, Liyao Xiang, Junhan Liu, Cen Chen, Li Wang, Xinbing Wang
Federated learning is emerging as a machine learning technique that trains a model across multiple decentralized parties.
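The aggregation step common to such cross-party training can be illustrated with federated averaging (FedAvg), the canonical scheme: each party trains on its own data and a server averages the resulting weights, weighted by local data size. This is a minimal sketch with hypothetical toy numbers, not the paper's specific protocol.

```python
# Minimal FedAvg sketch: the server aggregates per-party model weights
# without ever seeing the parties' raw training data.

def federated_average(client_weights, client_sizes):
    """Data-size-weighted average of per-client weight vectors (lists of floats)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two parties with different amounts of local data.
global_weights = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
print(global_weights)  # [2.5, 3.5] — the larger party pulls the average toward its weights
```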
no code implementations • 30 Apr 2021 • Jungang Yang, Liyao Xiang, Weiting Li, Wei Liu, Xinbing Wang
The wide deployment of machine learning in recent years has created a great demand for large-scale, high-dimensional data, whose use raises serious privacy concerns.
no code implementations • 29 Apr 2021 • Shuang Zhang, Liyao Xiang, Xi Yu, Pengzhi Chu, Yingqi Chen, Chen Cen, Li Wang
Real-world data is usually segmented by attributes and distributed across different parties.
no code implementations • 1 Jan 2021 • Jungang Yang, Liyao Xiang, Ruidong Chen, Yukun Wang, Wei Wang, Xinbing Wang
We focus on certified robustness of smoothed classifiers in this work, and propose to use the worst-case population loss over noisy inputs as a robustness metric.
no code implementations • 21 Oct 2020 • Jungang Yang, Liyao Xiang, Ruidong Chen, Yukun Wang, Wei Wang, Xinbing Wang
For smoothed classifiers, we propose the worst-case adversarial loss over input distributions as a robustness certificate.
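The smoothed classifier underlying both of these certificates predicts by majority vote over noisy copies of the input. The sketch below shows only that smoothing primitive (with a hypothetical toy base classifier), not the worst-case-loss certificate the papers propose.

```python
import random

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    """Majority vote of the base classifier over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: sign of the first coordinate.
clf = lambda x: 1 if x[0] >= 0 else 0
print(smoothed_predict(clf, [0.8, -0.2]))  # 1 — most noisy copies keep x[0] >= 0
```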
no code implementations • 9 Oct 2020 • Hui Xu, Liyao Xiang, Youmin Le, Xiaoying Gan, Yuting Jia, Luoyi Fu, Xinbing Wang
Iterated line graphs are introduced for the first time to describe such high-order information. Based on them, we present a new graph matching method, the High-order Graph Matching Network (HGMN), which learns not only the local structural correspondence but also the hyperedge relations across graphs.
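The line-graph construction itself is simple: each edge of the original graph becomes a node, and two such nodes are adjacent when the edges share an endpoint; iterating the construction exposes higher-order structure. A minimal stdlib-only sketch (not the HGMN model):

```python
def line_graph(edges):
    """Line graph: each edge becomes a node; two nodes are adjacent
    iff the corresponding edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    lg = []
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if set(edges[i]) & set(edges[j]):  # shared endpoint
                lg.append((edges[i], edges[j]))
    return lg

# Path graph a-b-c: its line graph joins edge (a,b) to edge (b,c).
g1 = line_graph([("a", "b"), ("b", "c")])
print(g1)  # [(('a', 'b'), ('b', 'c'))]

# Iterating: feed the line graph's edge list back in for higher-order relations.
g2 = line_graph(g1)  # a single edge yields a one-node line graph with no edges
```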
no code implementations • 11 Sep 2020 • Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, with both weight inheritance from the lottery ticket and adversarial training improving model robustness in network pruning.
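The pruning step these experiments build on can be sketched as global magnitude pruning: zero out the smallest-magnitude weights and keep a binary mask marking the surviving "ticket" positions. This is a generic illustration, not the paper's exact procedure.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights; the returned mask marks the
    surviving positions (which lottery-ticket pruning then retrains from the
    original initialization)."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    threshold = sorted(abs(w) for w in weights)[k - 1] if k > 0 else float("-inf")
    mask = [0 if abs(w) <= threshold else 1 for w in weights]
    return [w * m for w, m in zip(weights, mask)], mask

pruned, mask = magnitude_prune([0.1, -2.0, 0.05, 1.5], sparsity=0.5)
print(pruned)  # [0.0, -2.0, 0.0, 1.5] — the two smallest-magnitude weights are removed
```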
no code implementations • 21 Jun 2020 • Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang
Compared to the traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.
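A d-ary number is simply a base-d expansion, so each feature element becomes a small digit vector. The sketch below illustrates only that encoding for integers, not the RENN architecture itself.

```python
def to_d_ary(n, d, width):
    """Encode a non-negative integer as a fixed-width vector of base-d digits
    (least-significant digit first)."""
    digits = []
    for _ in range(width):
        digits.append(n % d)
        n //= d
    return digits

print(to_d_ary(11, 3, 4))  # [2, 0, 1, 0]  since 11 = 2 + 0*3 + 1*9
```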
no code implementations • 18 Mar 2020 • Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang
We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
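The quaternion machinery such a network relies on is rotation in 3-D: a feature rotated by a secret unit quaternion is unreadable without knowing that quaternion. The sketch below shows only this rotation primitive (Hamilton product plus conjugation), not the QNN construction.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def rotate(q, v):
    """Rotate a 3-D feature v by unit quaternion q: v' = q v q*."""
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0, *v)), q_conj)[1:]

# Rotate the feature (1, 0, 0) by 90 degrees about the z-axis.
c = math.sqrt(2) / 2  # cos(45°) = sin(45°)
feature = rotate((c, 0.0, 0.0, c), (1.0, 0.0, 0.0))
# feature ≈ (0, 1, 0); only a party holding q can invert the rotation.
```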
no code implementations • 18 Dec 2019 • Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li
Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.
1 code implementation • ICLR 2020 • Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang
Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features.