no code implementations • 13 Jan 2025 • Guozhi Yuan, Youfeng Liu, Jingli Yang, Wei Jia, Kai Lin, Yansong Gao, Shan He, Zilin Ding, Haitao Li
Code Action addresses these issues while also introducing the challenges of a more complex action space and more difficult action organization.
no code implementations • 7 Nov 2024 • Yongqi Jiang, Yansong Gao, Chunyi Zhou, Hongsheng Hu, Anmin Fu, Willy Susilo
Consequently, safeguarding the Intellectual Property (IP) of well-trained models is attracting increasing attention.
no code implementations • 10 Oct 2024 • Lu Yang, Seyit Camtepe, Yansong Gao, Vicky Liu, Dhammika Jayalath
Experimental results show that the proposed RFFI system achieved an average classification accuracy improvement of 33.3% in indoor environments and 34.5% in outdoor environments.
1 code implementation • 23 May 2024 • Shengfang Zhai, Huanran Chen, Yinpeng Dong, Jiajun Li, Qingni Shen, Yansong Gao, Hang Su, Yang Liu
Text-to-image diffusion models have achieved tremendous success in the field of controllable image generation, while also coming along with issues of privacy leakage and data copyrights.
no code implementations • 13 Mar 2024 • Na Li, Chunyi Zhou, Yansong Gao, Hui Chen, Anmin Fu, Zhi Zhang, Yu Shui
Data users have been endowed with the right to have their data forgotten.
1 code implementation • 1 Oct 2023 • Hua Ma, Shang Wang, Yansong Gao, Zhi Zhang, Huming Qiu, Minhui Xue, Alsharif Abuadbba, Anmin Fu, Surya Nepal, Derek Abbott
In VCB attacks, any sample from a class activates the implanted backdoor when the secret trigger is present.
no code implementations • 22 Apr 2023 • Yansong Gao, Zhihong Pan, Xin Zhou, Le Kang, Pratik Chaudhari
This work analyzes how the backward error affects the diffusion ODEs and the sample quality in DDPMs.
no code implementations • 27 Feb 2023 • Lu Yang, Seyit Camtepe, Yansong Gao, Vicky Liu, Dhammika Jayalath
The resulting radio frequency fingerprints (RFFs) are distorted, leading to low device detection and classification accuracy.
no code implementations • 3 Feb 2023 • Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao
In a number of practical scenarios, VFL is more relevant than HFL as different companies (e.g., bank and retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.
no code implementations • 24 Nov 2022 • Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal
In this study, we introduce DeepTaster, a novel DNN fingerprinting technique, to address scenarios where a victim's data is unlawfully used to build a suspect model.
1 code implementation • 6 Sep 2022 • Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated by inconspicuous, natural physical triggers.
no code implementations • 10 Jun 2022 • Ngoc Duy Pham, Alsharif Abuadbba, Yansong Gao, Tran Khoa Phan, Naveen Chilamkurti
Experimental results with different datasets have affirmed the advantages of the B-SL models compared with several benchmark models.
no code implementations • 31 May 2022 • Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu
Compared with a representative SSBA as a baseline ($SSBA_{Base}$), $CASSOCK$-based attacks have significantly advanced the attack performance, i.e., higher ASR and lower FPR with comparable CDA (clean data accuracy).
no code implementations • 14 Apr 2022 • Yansong Gao, Jie Zhang
That is, mechanism K is pointwise better than mechanism P. Next, for each task $j$, when machines' execution costs $t_i^j$ are drawn independently and identically from a task-specific distribution $F^j(t)$, we show that the average-case approximation ratio of mechanism K converges to a constant.
no code implementations • 13 Apr 2022 • Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao
Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models.
no code implementations • 10 Feb 2022 • Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang
By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of the class in the user's local dataset, and thus the user's preference of the class is exposed.
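A toy numpy sketch of the gradient-sensitivity intuition described above, assuming a linear softmax "user model" (all names, shapes, and the skew are illustrative, not the paper's implementation): the gradient mass contributed by each class hints at its proportion in the local dataset.

```python
import numpy as np

def class_gradient_sensitivity(X, y, W, num_classes):
    """Norm of the cross-entropy gradient w.r.t. W contributed by each
    class; classes over-represented in the local data tend to carry
    larger gradient mass (illustrative toy version of the idea)."""
    logits = X @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    sensitivity = np.zeros(num_classes)
    for c in range(num_classes):
        idx = y == c
        if not idx.any():
            continue
        onehot = np.eye(num_classes)[y[idx]]
        grad = X[idx].T @ (probs[idx] - onehot)   # dL/dW over class-c rows
        sensitivity[c] = np.linalg.norm(grad)
    return sensitivity

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))             # local dataset features
y = np.array([0] * 90 + [1] * 10)         # skewed class proportions
W = rng.normal(scale=0.01, size=(5, 2))   # linear softmax "user model"
sens = class_gradient_sensitivity(X, y, W, num_classes=2)
# Comparing per-class sensitivities hints at the local class proportions.
```

Real attacks observe gradients over training rounds rather than a single snapshot, but the per-class decomposition above is the core signal.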
2 code implementations • Approximate Inference (AABI) Symposium 2022 • Yansong Gao, Rahul Ramesh, Pratik Chaudhari
Such priors enable the task to maximally affect the Bayesian posterior, e.g., reference priors depend upon the number of samples available for learning the task; for very small sample sizes, the prior puts more probability mass on low-complexity models in the hypothesis space.
no code implementations • 21 Jan 2022 • Hua Ma, Yinshan Li, Yansong Gao, Alsharif Abuadbba, Zhi Zhang, Anmin Fu, Hyoungshick Kim, Said F. Al-Sarawi, Nepal Surya, Derek Abbott
The average ASR remains as high as 78% in the transfer learning attack scenarios evaluated on CenterNet.
no code implementations • 22 Nov 2021 • Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott
A backdoor deep learning (DL) model behaves normally upon clean inputs but misbehaves upon trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments.
no code implementations • 20 Aug 2021 • Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Minhui Xue, Anmin Fu, Zhang Jiliang, Said Al-Sarawi, Derek Abbott
This work reveals that the standard quantization toolkits can be abused to activate a backdoor.
no code implementations • 9 May 2021 • Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi
To this end, a 1-bit quantized DNN model, or binary neural network (BNN), maximizes memory efficiency: each parameter in a BNN model occupies only 1 bit.
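A minimal numpy illustration of the 1-bit quantization just described (sign binarization; the layer and tie-breaking rule are illustrative, not the paper's scheme):

```python
import numpy as np

def binarize(w):
    """Sign binarization: each parameter stored as a single bit (+1/-1)."""
    b = np.sign(w)
    b[b == 0] = 1.0       # break ties toward +1
    return b

def bnn_layer(x, w_real):
    """Forward pass of a toy binary layer: binarized weights,
    full-precision activations kept for simplicity."""
    return x @ binarize(w_real)

w = np.array([[0.3, -0.7], [-0.1, 0.4]])
print(binarize(w))        # [[ 1. -1.] [-1.  1.]]
# Storage drops from 32 bits to 1 bit per parameter, a 32x reduction.
```

Training keeps real-valued shadow weights and binarizes them on the forward pass; only the 1-bit weights need to be shipped for inference.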
1 code implementation • 3 Mar 2021 • Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.
no code implementations • 1 Mar 2021 • Tom Roth, Yansong Gao, Alsharif Abuadbba, Surya Nepal, Wei Liu
Many adversarial attacks target natural language processing systems, most of which succeed through modifying the individual tokens of a document.
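The entry above notes that most such attacks succeed by modifying individual tokens; a self-contained sketch of a greedy token-substitution attack (the scorer, substitute table, and budget are illustrative, not the paper's method):

```python
def greedy_token_attack(tokens, score, substitutes, budget=3):
    """Hypothetical greedy attack: at each step, apply the single token
    substitution that most lowers the victim's score (e.g. the
    probability assigned to the correct label)."""
    tokens = list(tokens)
    for _ in range(budget):
        best = None  # (new_score, position, replacement)
        for i, tok in enumerate(tokens):
            for sub in substitutes.get(tok, []):
                trial = tokens[:i] + [sub] + tokens[i + 1:]
                s = score(trial)
                if best is None or s < best[0]:
                    best = (s, i, sub)
        if best is None or best[0] >= score(tokens):
            break  # no substitution helps any further
        tokens[best[1]] = best[2]
    return tokens

# Toy victim scorer: counts occurrences of the word "good".
score = lambda toks: toks.count("good")
subs = {"good": ["fine", "nice"], "movie": ["film"]}
out = greedy_token_attack(["a", "good", "good", "movie"], score, subs)
```

With this toy scorer the attack removes every occurrence of "good" within the budget; real attacks use a model's label probability as the score and synonym embeddings as the substitute table.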
1 code implementation • NeurIPS Workshop DL-IG 2020 • Yansong Gao, Pratik Chaudhari
Using tools in information geometry, the distance is defined to be the length of the shortest weight trajectory on a Riemannian manifold as a classifier is fitted on an interpolated task.
no code implementations • 8 Oct 2020 • Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal
To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.
no code implementations • 27 Jul 2020 • Anmin Fu, Xianglong Zhang, Naixue Xiong, Yansong Gao, Huaqun Wang
If no more than n-2 of n participants collude with the aggregation server, VFL guarantees that the encrypted gradients of the other participants cannot be inverted.
Cryptography and Security E.3; I.2.11
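A toy sketch of the pairwise-masking idea behind such collusion-resistant aggregation (not the paper's exact protocol): each participant perturbs its gradient with masks agreed pairwise with every other participant, and the masks cancel only in the server's aggregate, so individual gradients stay hidden.

```python
import random

def masked_updates(grads, pair_masks):
    """Participant i adds mask r_ij for each j>i and subtracts r_ji for
    each j<i; every mask appears once with + and once with -, so the
    server's sum of masked updates equals the sum of raw gradients."""
    n = len(grads)
    masked = []
    for i in range(n):
        m = grads[i]
        for j in range(n):
            if j == i:
                continue
            r = pair_masks[min(i, j)][max(i, j)]
            m = m + r if i < j else m - r
        masked.append(m)
    return masked

n = 4
rng = random.Random(42)
# In practice each pair derives its mask from a shared secret; here we
# just generate the upper-triangular mask table directly.
pair_masks = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
grads = [1.0, 2.0, 3.0, 4.0]
masked = masked_updates(grads, pair_masks)
assert abs(sum(masked) - sum(grads)) < 1e-9   # masks cancel in aggregate
```

Homomorphic encryption or secret sharing replaces the plain masks in deployed schemes, but the cancellation-in-aggregate structure is the same.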
no code implementations • 27 Jul 2020 • Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng
For a fixed total email dataset, the global RNN-based model suffers a 1.8% accuracy drop when increasing organizational counts from 2 to 10.
1 code implementation • 21 Jul 2020 • Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting the intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind the attacks, and no single defense can prevent all types of backdoor attacks.
1 code implementation • 30 Mar 2020 • Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
For learning performance, which is specified by the model accuracy and convergence speed metrics, we empirically evaluate both FL and SplitNN under different types of data distributions such as imbalanced and non-independent and identically distributed (non-IID) data.
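A common way to simulate the non-IID distributions mentioned above is a Dirichlet split over class proportions; a minimal numpy sketch (the alpha value and client count are illustrative, and this is not necessarily the paper's exact procedure):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Assign sample indices to clients with per-class proportions drawn
    from Dirichlet(alpha); smaller alpha -> more skewed (non-IID) splits."""
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_idx, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_idx

rng = np.random.default_rng(1)
labels = np.repeat(np.arange(10), 100)   # 10 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5, rng=rng)
assert sum(len(p) for p in parts) == len(labels)
```

With alpha large the split approaches IID; with alpha near zero each client ends up holding only a few classes, the imbalanced regime the entry evaluates.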
1 code implementation • 16 Mar 2020 • Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal
We observed that the 1D CNN model under split learning can achieve the same accuracy of 98.9% as the original (non-split) model.
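A minimal numpy sketch of the split-learning setup such results rely on: the model is cut into a client half and a server half, and only the smashed activations at the cut layer cross the network, never the raw data (layer sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(16, 8))   # client-side layers up to the cut layer
W2 = rng.normal(size=(8, 2))    # server-side layers after the cut

def client_forward(x):
    """Client computes up to the cut layer and transmits only the
    smashed (intermediate) activations."""
    return np.maximum(x @ W1, 0.0)   # ReLU at the cut layer

def server_forward(smashed):
    """Server finishes the forward pass on the received activations."""
    return smashed @ W2

x = rng.normal(size=(4, 16))     # raw data stays on the client
smashed = client_forward(x)      # this is all the server ever sees
logits = server_forward(smashed)
```

Backpropagation mirrors the split: the server returns gradients at the cut layer, and each side updates only its own weights.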
no code implementations • ICML 2020 • Yansong Gao, Pratik Chaudhari
This paper employs a formal connection of machine learning with thermodynamics to characterize the quality of learnt representations for transfer learning.
3 code implementations • 23 Nov 2019 • Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi Zhang, Gongxuan Zhang, Surya Nepal, Damith C. Ranasinghe, Hyoungshick Kim
In particular, for vision tasks, we can always achieve a 0% FRR and FAR.
Cryptography and Security
no code implementations • 19 Oct 2019 • Lu Yang, Yansong Gao, Junqing Zhang, Seyit Camtepe, Dhammika Jayalath
Unfortunately, there is no experimental validation in communication environments with large-scale and small-scale fading effects.
no code implementations • 1 Jun 2019 • Yansong Gao, Jie Zhang
Recently, [Deng, Gao, Zhang 2017] showed that when the agents' preferences are drawn from a uniform distribution, its \textit{average-case approximation ratio} is upper bounded by 3.718.
4 code implementations • 18 Feb 2019 • Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal
Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation.
Cryptography and Security
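The entry above concerns detecting trojan inputs at run-time; a toy sketch of an entropy-based perturbation test in that spirit (the blending ratio, toy model, and names are illustrative, not the paper's exact method): a trigger strong enough to survive blending with clean samples pins the prediction, producing abnormally low entropy.

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of a classifier's output distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def perturbation_score(x, held_out, model, n=8, rng=None):
    """Average prediction entropy over copies of x blended with random
    clean samples; low scores suggest a trigger is dominating."""
    if rng is None:
        rng = np.random.default_rng()
    entropies = []
    for _ in range(n):
        other = held_out[rng.integers(len(held_out))]
        blended = 0.5 * x + 0.5 * other
        entropies.append(prediction_entropy(model(blended)))
    return float(np.mean(entropies))

def toy_model(x):
    """Stand-in 2-class softmax classifier."""
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
held_out = rng.normal(size=(20, 10))
score = perturbation_score(rng.normal(size=10), held_out, toy_model, rng=rng)
# Inputs scoring below a threshold calibrated on clean data would be
# flagged as trojaned at run-time.
```

The test is model-agnostic: it needs only black-box access to predictions and a small pool of held-out clean samples.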
no code implementations • 19 May 2018 • Yansong Gao, Yang Su, Lei Xu, Damith C. Ranasinghe
A physical unclonable function (PUF), like a fingerprint, exploits manufacturing randomness to endow each physical item with a unique identifier.
Cryptography and Security
no code implementations • 20 Jun 2017 • Yansong Gao, Said F. Al-Sarawi, Derek Abbott, Ahmad-Reza Sadeghi, Damith C. Ranasinghe
Physical unclonable functions (PUFs), as hardware security primitives, exploit manufacturing randomness to extract hardware instance-specific secrets.
Cryptography and Security