Search Results for author: Yansong Gao

Found 32 papers, 9 papers with code

Horizontal Class Backdoor to Deep Learning

no code implementations 1 Oct 2023 Hua Ma, Shang Wang, Yansong Gao

Only when the trigger is presented concurrently with the innocuous HC feature can the backdoor be effectively activated.

object-detection Object Detection +1

Fast Diffusion Probabilistic Model Sampling through the lens of Backward Error Analysis

no code implementations 22 Apr 2023 Yansong Gao, Zhihong Pan, Xin Zhou, Le Kang, Pratik Chaudhari

This work analyzes how the backward error affects the diffusion ODEs and the sample quality in DDPMs.


Vertical Federated Learning: Taxonomies, Threats, and Prospects

no code implementations 3 Feb 2023 Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao

In a number of practical scenarios, VFL is more relevant than HFL as different companies (e.g., bank and retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.

Federated Learning

Tracking Dataset IP Use in Deep Neural Networks

no code implementations 24 Nov 2022 Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal

In this work, we propose a novel DNN fingerprinting technique dubbed DEEPTASTER to prevent a new attack scenario in which a victim's data is stolen to build a suspect model.

Data Augmentation Transfer Learning

TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World

1 code implementation 6 Sep 2022 Hua Ma, Yinshan Li, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Anmin Fu, Said F. Al-Sarawi, Nepal Surya, Derek Abbott

We observe that the backdoor effects of both misclassification and cloaking are robustly achieved in the wild when the backdoor is activated by inconspicuous natural physical triggers.

Event Detection Image Classification +3

Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction

no code implementations 10 Jun 2022 Ngoc Duy Pham, Alsharif Abuadbba, Yansong Gao, Tran Khoa Phan, Naveen Chilamkurti

Experimental results with different datasets have affirmed the advantages of the B-SL models compared with several benchmark models.

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences

no code implementations 31 May 2022 Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu

Compared with a representative SSBA baseline ($SSBA_{Base}$), $CASSOCK$-based attacks significantly advance attack performance, i.e., higher ASR and lower FPR with comparable CDA (clean data accuracy).

On Scheduling Mechanisms Beyond the Worst Case

no code implementations 14 Apr 2022 Yansong Gao, Jie Zhang

That is, mechanism K is pointwise better than mechanism P. Next, for each task $j$, when machines' execution costs $t_i^j$ are independent and identically drawn from a task-specific distribution $F^j(t)$, we show that the average-case approximation ratio of mechanism K converges to a constant.


Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures

no code implementations 13 Apr 2022 Huming Qiu, Hua Ma, Zhi Zhang, Alsharif Abuadbba, Wei Kang, Anmin Fu, Yansong Gao

Since Deep Learning (DL) backdoor attacks have been revealed as one of the most insidious adversarial attacks, a number of countermeasures have been developed with certain assumptions defined in their respective threat models.

PPA: Preference Profiling Attack Against Federated Learning

no code implementations 10 Feb 2022 Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang

By observing a user model's gradient sensitivity to a class, PPA can profile the proportion of samples of that class in the user's local dataset, thus exposing the user's preference for the class.

Federated Learning Inference Attack

Deep Reference Priors: What is the best way to pretrain a model?

2 code implementations AABI Symposium 2022 Yansong Gao, Rahul Ramesh, Pratik Chaudhari

Such priors enable the task to maximally affect the Bayesian posterior, e.g., reference priors depend upon the number of samples available for learning the task, and for very small sample sizes the prior puts more probability mass on low-complexity models in the hypothesis space.

Semi-Supervised Image Classification Transfer Learning

NTD: Non-Transferability Enabled Backdoor Detection

no code implementations 22 Nov 2021 Yinshan Li, Hua Ma, Zhi Zhang, Yansong Gao, Alsharif Abuadbba, Anmin Fu, Yifeng Zheng, Said F. Al-Sarawi, Derek Abbott

A backdoor deep learning (DL) model behaves normally upon clean inputs but misbehaves upon trigger inputs as the backdoor attacker desires, posing severe consequences to DL model deployments.

Face Recognition Traffic Sign Recognition

RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things

no code implementations 9 May 2021 Huming Qiu, Hua Ma, Zhi Zhang, Yifeng Zheng, Anmin Fu, Pan Zhou, Yansong Gao, Derek Abbott, Said F. Al-Sarawi

To this end, a 1-bit quantized DNN model, or deep binary neural network (BNN), maximizes memory efficiency, since each parameter in a BNN model takes only 1 bit.


Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things

1 code implementation 3 Mar 2021 Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.

BIG-bench Machine Learning Federated Learning

An Information-Geometric Distance on the Space of Tasks

1 code implementation NeurIPS Workshop DL-IG 2020 Yansong Gao, Pratik Chaudhari

Using tools in information geometry, the distance is defined to be the length of the shortest weight trajectory on a Riemannian manifold as a classifier is fitted on an interpolated task.

Image Classification Transfer Learning
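The snippet above describes the distance as the length of the shortest weight trajectory on a Riemannian manifold. As a hedged sketch (illustrative notation, not necessarily the paper's exact formulation), such an information-geometric length functional between tasks $\tau_0$ and $\tau_1$ would take the form

```latex
d(\tau_0, \tau_1) \;=\; \min_{w(\cdot)} \int_0^1
  \sqrt{\dot{w}(t)^{\top}\, F\!\left(w(t)\right)\, \dot{w}(t)}\;\mathrm{d}t
```

where $w(t)$ is the classifier's weight trajectory as it is fitted on the interpolated task and $F(w)$ is the Fisher information metric supplying the Riemannian structure.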

Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks

no code implementations 8 Oct 2020 Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal

To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds.


VFL: A Verifiable Federated Learning with Privacy-Preserving for Big Data in Industrial IoT

no code implementations 27 Jul 2020 Anmin Fu, Xianglong Zhang, Naixue Xiong, Yansong Gao, Huaqun Wang

If no more than n-2 of the n participants collude with the aggregation server, VFL guarantees that the encrypted gradients of the other participants cannot be inverted.

Cryptography and Security E.3; I.2.11
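The n-2 collusion threshold above is characteristic of secret-shared aggregation. As a generic illustration only (plain additive secret sharing, not the paper's actual verifiable VFL protocol, and with hypothetical names), splitting a gradient into n shares that sum to the secret leaves any n-1 shares individually uniform:

```python
import random

PRIME = 2**61 - 1  # field modulus; an illustrative choice, not the paper's

def additive_shares(secret, n):
    """Split an integer-encoded gradient into n shares summing to the
    secret mod PRIME. Any subset of n-1 shares is uniformly random on
    its own, so the server plus up to n-2 colluders learn nothing."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

secret = 123456
shares = additive_shares(secret, 5)
assert len(shares) == 5
assert sum(shares) % PRIME == secret  # aggregation recovers the sum
```

Reconstruction needs all n shares, which is why missing even one participant's share (as when fewer than n-1 parties collude) reveals nothing about the gradient.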

Evaluation of Federated Learning in Phishing Email Detection

no code implementations 27 Jul 2020 Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng

For a fixed total email dataset, the global RNN-based model suffers a 1.8% accuracy drop when the organizational count increases from 2 to 10.

Distributed Computing Federated Learning +2

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

1 code implementation 21 Jul 2020 Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim

We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, the research on defense is far behind the attack, and there is no single defense that can prevent all types of backdoor attacks.

End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things

1 code implementation 30 Mar 2020 Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

For learning performance, which is specified by the model accuracy and convergence speed metrics, we empirically evaluate both FL and SplitNN under different types of data distributions such as imbalanced and non-independent and identically distributed (non-IID) data.

Federated Learning

Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?

1 code implementation 16 Mar 2020 Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal

We observed that the 1D CNN model under split learning can achieve the same 98.9% accuracy as the original (non-split) model.

Privacy Preserving

A Free-Energy Principle for Representation Learning

no code implementations ICML 2020 Yansong Gao, Pratik Chaudhari

This paper employs a formal connection of machine learning with thermodynamics to characterize the quality of learnt representations for transfer learning.

Classification General Classification +3

A Channel Perceiving Attack on Long-Range Key Generation and Its Countermeasure

no code implementations 19 Oct 2019 Lu Yang, Yansong Gao, Junqing Zhang, Seyit Camtepe, Dhammika Jayalath

Unfortunately, there is no experimental validation for communication environments with both large-scale and small-scale fading effects.

Average-case Analysis of the Assignment Problem with Independent Preferences

no code implementations 1 Jun 2019 Yansong Gao, Jie Zhang

Recently, [Deng, Gao, Zhang 2017] show that when the agents' preferences are drawn from a uniform distribution, its average-case approximation ratio is upper bounded by 3.718.

Open-Ended Question Answering

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

3 code implementations 18 Feb 2019 Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal

Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation.

Cryptography and Security
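STRIP addresses the run-time detection challenge above by superimposing a suspect input with clean samples and measuring the entropy of the model's predictions: a trojaned input keeps predicting the attacker's target class despite perturbation, so its entropy is abnormally low. A minimal sketch of that entropy test, with stand-in `predict_fn` lambdas that are hypothetical rather than the paper's models:

```python
import numpy as np

def strip_entropy(x, clean_samples, predict_fn):
    """Average Shannon entropy of predictions over copies of x, each
    superimposed with a held-out clean sample (STRIP-style test)."""
    entropies = []
    for c in clean_samples:
        blended = 0.5 * x + 0.5 * c           # superimpose a clean image
        p = np.clip(predict_fn(blended), 1e-12, 1.0)  # softmax vector
        entropies.append(-np.sum(p * np.log(p)))
    return float(np.mean(entropies))

# Toy demo: a trojaned input is simulated by a predictor locked onto one
# class; a clean input by a near-uniform predictor.
trojan_predict = lambda z: np.array([0.97, 0.01, 0.01, 0.01])
clean_predict  = lambda z: np.array([0.30, 0.25, 0.25, 0.20])
clean_refs = [np.random.rand(8, 8) for _ in range(10)]
x = np.random.rand(8, 8)
low  = strip_entropy(x, clean_refs, trojan_predict)
high = strip_entropy(x, clean_refs, clean_predict)
assert low < high  # low entropy flags the (simulated) trojaned input
```

In practice a detection threshold on this entropy would be calibrated on known-clean inputs; the sketch only shows the separation the defence relies on.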

Lightweight (Reverse) Fuzzy Extractor with Multiple Referenced PUF Responses

no code implementations 19 May 2018 Yansong Gao, Yang Su, Lei Xu, Damith C. Ranasinghe

A physical unclonable function (PUF), like a fingerprint, exploits manufacturing randomness to endow each physical item with a unique identifier.

Cryptography and Security

Modeling Attack Resilient Reconfigurable Latent Obfuscation Technique for PUF based Lightweight Authentication

no code implementations 20 Jun 2017 Yansong Gao, Said F. Al-Sarawi, Derek Abbott, Ahmad-Reza Sadeghi, Damith C. Ranasinghe

Physical unclonable functions (PUFs), as hardware security primitives, exploit manufacturing randomness to extract hardware instance-specific secrets.

Cryptography and Security
