Federated learning is an emerging machine learning technique that trains a model across multiple decentralized parties.
The wide deployment of machine learning in recent years has created great demand for large-scale, high-dimensional data, whose collection and use raise serious privacy concerns.
Real-world data is often partitioned by attribute and distributed across different parties.
In this work we focus on the certified robustness of smoothed classifiers, and propose the worst-case population loss over noisy inputs as a robustness metric. For a smoothed classifier, this worst-case adversarial loss over input distributions then serves as a robustness certificate.
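As a rough illustration of the quantity being certified, the expected loss of a classifier under input noise can be estimated by Monte Carlo sampling. The sketch below is a minimal stdlib version assuming Gaussian noise and 0/1 loss; the helper `noisy_loss` and its parameters are illustrative, not the paper's procedure.

```python
import random

def noisy_loss(f, x, y, sigma, n=1000, seed=0):
    """Monte Carlo estimate of the expected 0/1 loss of classifier f
    on Gaussian-perturbed copies of input x with true label y.
    (Illustrative sketch; the paper certifies the worst case of this
    quantity over input distributions.)"""
    rng = random.Random(seed)
    errs = sum(f([xi + rng.gauss(0, sigma) for xi in x]) != y
               for _ in range(n))
    return errs / n

# A constant classifier is trivially robust to input noise:
print(noisy_loss(lambda v: 1, [0.0], 1, sigma=1.0, n=50))  # 0.0
```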
We introduce iterated line graphs, for the first time in this setting, to describe such high-order information. Building on them, we present a new graph matching method, the High-order Graph Matching Network (HGMN), which learns not only local structural correspondences but also hyperedge relations across graphs.
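The line-graph construction behind this is standard: each edge of the original graph becomes a node, and two such nodes are joined when the original edges share an endpoint; iterating the construction exposes progressively higher-order adjacency. A minimal stdlib sketch (a toy helper, not the HGMN implementation):

```python
from itertools import combinations

def line_graph(edges):
    """One node per original edge; two such nodes are joined
    whenever the corresponding edges share an endpoint."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

def iterated_line_graph(edges, k):
    """Apply the line-graph construction k times
    (illustrative helper, not the paper's code)."""
    for _ in range(k):
        edges = line_graph(edges)
    return edges

triangle = [(0, 1), (1, 2), (0, 2)]
print(len(iterated_line_graph(triangle, 1)))  # 3: L(K3) is again a triangle
```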
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, and that both inheriting weights from the lottery ticket and adversarial training improve model robustness during network pruning.
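For context, the pruning step in such experiments is typically magnitude-based: the smallest-magnitude weights are masked out, and the surviving weights are either reinitialized or inherited from an earlier checkpoint (the "lottery ticket"). A minimal sketch of the masking step, assuming a flat weight list; `magnitude_mask` is a hypothetical helper, not the paper's method:

```python
def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask that prunes the smallest-magnitude
    fraction `sparsity` of weights (minimal sketch)."""
    k = int(len(weights) * sparsity)                      # number to prune
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:k])                               # smallest magnitudes
    return [0.0 if i in pruned else 1.0 for i in range(len(weights))]

w = [0.1, -2.0, 0.05, 1.5]
print(magnitude_mask(w, 0.5))  # [0.0, 1.0, 0.0, 1.0]
```

In lottery-ticket style pruning, this mask would then be applied to the inherited early-training weights rather than to freshly initialized ones.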
Compared to a traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.
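To make the representation concrete: a d-ary number is a base-d digit string, so a scalar feature can be expanded into a fixed-width vector of base-d digits. The sketch below shows one such encoding; `to_dary` and its least-significant-digit-first layout are assumptions for illustration, and the RENN's actual encoding may differ.

```python
def to_dary(n, d, width):
    """Encode a non-negative integer as a fixed-width base-d digit
    vector, least significant digit first (hypothetical helper)."""
    digits = []
    for _ in range(width):
        digits.append(n % d)
        n //= d
    return digits

print(to_dary(10, 3, 4))  # [1, 0, 1, 0]  since 10 = 1*1 + 0*3 + 1*9
```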
We propose a method to revise a neural network into a quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
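The algebraic primitive underlying quaternion-valued layers is the Hamilton product, which is non-commutative and mixes all four components of each feature. A self-contained sketch of that product (a standard mathematical definition, not the paper's layer implementation):

```python
def hamilton_product(p, q):
    """Hamilton product of quaternions p = a + bi + cj + dk
    and q = w + xi + yj + zk, each given as a 4-tuple."""
    a, b, c, d = p
    w, x, y, z = q
    return (a*w - b*x - c*y - d*z,
            a*x + b*w + c*z - d*y,
            a*y - b*z + c*w + d*x,
            a*z + b*y - c*x + d*w)

print(hamilton_product((0, 1, 0, 0), (0, 0, 1, 0)))  # i*j = k -> (0, 0, 0, 1)
```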
Powered by cloud-based machine learning services, numerous learning-driven mobile applications are gaining popularity on the market.
Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features.