Search Results for author: Liang Tong

Found 9 papers, 4 papers with code

Personalized Federated Learning via Heterogeneous Modular Networks

1 code implementation • 26 Oct 2022 • Tianchun Wang, Wei Cheng, Dongsheng Luo, Wenchao Yu, Jingchao Ni, Liang Tong, Haifeng Chen, Xiang Zhang

Personalized Federated Learning (PFL), which collaboratively trains a federated model while considering local clients under privacy constraints, has attracted much attention.

Personalized Federated Learning

FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification

no code implementations • 25 Oct 2022 • Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou

Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, which generate a poisoned graph that is then used as the input to the GNN models.

Adversarial Robustness • Data Poisoning +2
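For context, the poisoning threat model here means an attacker edits the graph structure itself before the GNN is trained. Below is a minimal illustrative sketch of such structural poisoning, assuming a NumPy adjacency matrix and a random edge-flip budget; real attacks choose the flips adversarially, and FocusedCleaner is the sanitizer that tries to undo them, not this attack.

```python
import numpy as np

def poison_graph(adj: np.ndarray, budget: int, seed=None) -> np.ndarray:
    """Illustrative structural poisoning: flip `budget` random edges in a
    symmetric adjacency matrix before GNN training. (Real attacks pick the
    flips that most degrade node classification.)"""
    rng = np.random.default_rng(seed)
    poisoned = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        poisoned[i, j] = poisoned[j, i] = 1 - poisoned[i, j]  # add or remove the edge
    return poisoned

# Usage: a sanitizer such as FocusedCleaner would try to recover the clean
# graph `adj` from the poisoned one before training the GNN.
adj = (np.random.rand(8, 8) > 0.7).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T          # symmetric, no self-loops
poisoned = poison_graph(adj, budget=5, seed=0)
print(int(np.abs(poisoned - adj).sum() // 2), "edges flipped")
```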

FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems

1 code implementation • CVPR 2021 • Liang Tong, Zhengzhang Chen, Jingchao Ni, Wei Cheng, Dongjin Song, Haifeng Chen, Yevgeniy Vorobeychik

Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks.

Face Recognition

Towards Robustness against Unsuspicious Adversarial Examples

no code implementations • 8 May 2020 • Liang Tong, Minzhe Guo, Atul Prakash, Yevgeniy Vorobeychik

We then experimentally demonstrate that our attacks indeed do not significantly change perceptual salience of the background, but are highly effective against classifiers robust to conventional attacks.

Defending Against Physically Realizable Attacks on Image Classification

2 code implementations • ICLR 2020 • Tong Wu, Liang Tong, Yevgeniy Vorobeychik

Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.

Classification • General Classification +1
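The defense described above is adversarial training: each minibatch is replaced by a worst-case, physically plausible perturbation before the gradient step. A minimal PyTorch sketch using a blank-square occlusion as that perturbation follows; the patch size, stride, and grid search are illustrative assumptions, not the paper's exact attack, which also optimizes the patch contents.

```python
import torch
import torch.nn.functional as F

def worst_occlusion(model, x, y, size=7, stride=7):
    """Grid-search the location of a blank square patch that most increases
    the loss. A crude stand-in for a rectangular-occlusion style attack."""
    best_loss, best_x = None, x
    _, _, h, w = x.shape
    with torch.no_grad():
        for top in range(0, h - size + 1, stride):
            for left in range(0, w - size + 1, stride):
                x_adv = x.clone()
                x_adv[:, :, top:top + size, left:left + size] = 0.0
                loss = F.cross_entropy(model(x_adv), y)
                if best_loss is None or loss > best_loss:
                    best_loss, best_x = loss, x_adv
    return best_x

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training step: occlude the batch, then fit the model on it."""
    model.eval()
    x_adv = worst_occlusion(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```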

Finding Needles in a Moving Haystack: Prioritizing Alerts with Adversarial Reinforcement Learning

no code implementations • 20 Jun 2019 • Liang Tong, Aron Laszka, Chao Yan, Ning Zhang, Yevgeniy Vorobeychik

We then use these in a double-oracle framework to obtain an approximate equilibrium of the game, which in turn yields a robust stochastic policy for the defender.

Intrusion Detection • reinforcement-learning +1
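The double-oracle framework mentioned above alternates between solving the restricted zero-sum game over the strategies discovered so far and adding each player's best response to the resulting mixture, stopping when neither oracle finds a new strategy. A minimal sketch with generic best-response oracles follows; the oracle and payoff functions are placeholders, not the alert-prioritization model from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Mixed equilibrium of a zero-sum matrix game (row player maximizes),
    solved as a linear program; returns (row_mix, col_mix, game_value)."""
    def best_mix(A):
        m, n = A.shape
        c = np.zeros(m + 1); c[-1] = -1.0                      # maximize the game value v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v <= (A^T x)_j for each column j
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub, np.zeros(n), A_eq, np.ones(1), bounds=bounds)
        return res.x[:m], res.x[-1]

    x, v = best_mix(payoff)
    y, _ = best_mix(-payoff.T)        # column player's game is the negated transpose
    return x, y, v

def double_oracle(defender_br, attacker_br, payoff_fn, d0, a0, max_iter=50):
    """Double-oracle loop: solve the restricted game, then grow each side's
    strategy set with its best response to the opponent's current mixture."""
    D, A = [d0], [a0]
    for _ in range(max_iter):
        payoff = np.array([[payoff_fn(d, a) for a in A] for d in D])
        x, y, value = solve_zero_sum(payoff)
        d_new = defender_br(A, y)     # defender's best response to the attacker mixture
        a_new = attacker_br(D, x)     # attacker's best response to the defender mixture
        grew = False
        if d_new not in D:
            D.append(d_new); grew = True
        if a_new not in A:
            A.append(a_new); grew = True
        if not grew:                  # no new strategies: approximate equilibrium reached
            break
    return D, A, x, y, value
```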

Adversarial Regression with Multiple Learners

1 code implementation • ICML 2018 • Liang Tong, Sixie Yu, Scott Alfeld, Yevgeniy Vorobeychik

We present an algorithm for computing this equilibrium, and show through extensive experiments that equilibrium models are significantly more robust than conventional regularized linear regression.

regression

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features

no code implementations • 28 Aug 2017 • Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik

A conventional approach to evaluating ML robustness to such attacks, as well as to designing robust ML, is to consider simplified feature-space models of attacks, in which the attacker changes ML features directly to effect evasion while minimizing or constraining the magnitude of this change.

Intrusion Detection • Malware Detection
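The feature-space attack model referred to above can be written as a small optimization: perturb the feature vector just enough to cross the classifier's decision boundary while keeping the change small. A toy sketch against a linear model follows; the linear classifier and L2 budget are illustrative assumptions, and realizable evasion must additionally produce a working artifact, which feature-space models abstract away.

```python
import numpy as np

def feature_space_evasion(w, b, x, budget):
    """Minimal feature-space evasion against a linear classifier f(x) = w·x + b.
    Moves x toward the decision boundary along -w, capped by an L2 budget.
    (Real malware evasion must also respect feature semantics; this ignores that.)"""
    margin = np.dot(w, x) + b                    # > 0 means 'malicious' in this toy setup
    if margin <= 0:
        return x                                 # already classified as benign
    step = (margin / np.dot(w, w)) * w           # smallest L2 change that reaches the boundary
    if np.linalg.norm(step) > budget:
        step = step / np.linalg.norm(step) * budget   # cap at the budget (evasion may fail)
    return x - step * 1.001                      # nudge just past the boundary

# Usage with a toy linear model: x is the malicious instance's feature vector.
w, b = np.array([1.0, -2.0, 0.5]), -0.1
x = np.array([3.0, 0.5, 1.0])
x_adv = feature_space_evasion(w, b, x, budget=5.0)
print(np.dot(w, x) + b, np.dot(w, x_adv) + b)    # margin before vs. after evasion
```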
