1 code implementation • 13 Dec 2023 • Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan
These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring the exact training data, while respecting the proprietary nature of such data.
1 code implementation • 17 Oct 2021 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, corresponding to different adversarial capabilities.
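A threshold-based attack of this kind can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: it assumes the adversary only observes the target model's output posteriors and flags an input as a training member when the top-class confidence exceeds a chosen threshold `tau` (the function name and threshold value are hypothetical).

```python
import numpy as np

def threshold_membership_attack(posteriors, tau):
    """Predict 'member' for inputs whose maximum class
    confidence exceeds the threshold tau.

    posteriors: (n_samples, n_classes) array of model outputs.
    Returns a boolean array: True = predicted training member.
    """
    confidence = np.max(posteriors, axis=1)
    return confidence > tau
```

The intuition is that models are typically more confident on data they were trained on, so no shadow-model training is required; only the threshold must be calibrated.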
1 code implementation • 24 Oct 2020 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
Machine learning models are shown to face a severe threat from model extraction attacks, where a well-trained private model owned by a service provider can be stolen by an attacker posing as a client.
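The core query-and-imitate loop of a model extraction attack can be illustrated with a toy example. This is a hedged sketch under simplifying assumptions (a linear victim model with noise-free responses, and the `victim_api` name is hypothetical), not the attack studied in the paper: the attacker sends its own inputs to the prediction API and fits a surrogate to the returned outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim: a private linear model served behind an API.
w_victim = np.array([2.0, -1.0, 0.5])

def victim_api(X):
    """The only access the attacker has: query in, prediction out."""
    return X @ w_victim

# The attacker crafts queries and records the API's answers...
X_query = rng.normal(size=(200, 3))
y_query = victim_api(X_query)

# ...then fits a surrogate that imitates the victim's behaviour.
w_surrogate, *_ = np.linalg.lstsq(X_query, y_query, rcond=None)
```

With enough noise-free queries the surrogate recovers the victim's parameters exactly, which is why query access alone already constitutes a threat to model confidentiality.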
no code implementations • 29 Aug 2019 • Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang
To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference from the Student model is performed first to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to effectively validate clean inputs or reject adversarial ones.
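The two-phase control flow described above can be sketched as follows. This is a minimal illustration of the structure only, assuming hypothetical interfaces: `student` maps an input to a predicted label, and each differentiator returns 1 when the input looks clean; the grouping of differentiators by label and the parameters `k` and `agree` are assumptions, not the paper's configuration.

```python
def two_phase_defence(x, student, differentiators, k=3, agree=2):
    """Phase 1: the Student model's prediction narrows the
    candidate differentiators. Phase 2: only k of them vote
    on whether to accept x as clean or reject it as adversarial.

    differentiators: dict mapping a predicted label to a list of
    callables, each returning 1 (clean vote) or 0 (reject vote).
    """
    label = student(x)                            # phase 1: narrow candidates
    candidates = differentiators.get(label, [])[:k]
    votes = sum(d(x) for d in candidates)         # phase 2: small fixed ensemble
    return "clean" if votes >= agree else "reject"
```

Because only `k` differentiators ever run per input, inference cost stays bounded regardless of how many differentiators exist in total.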