Search Results for author: Shouling Ji

Found 59 papers, 25 papers with code

FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases

no code implementations28 Feb 2023 Chong Fu, Xuhong Zhang, Shouling Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin

Thus, in this paper, we propose FreeEagle, the first data-free backdoor detection method that can effectively detect complex backdoor attacks on deep neural networks, without relying on access to any clean samples or samples containing the trigger.

Backdoor Attack

TextDefense: Adversarial Text Detection based on Word Importance Entropy

no code implementations12 Feb 2023 Lujia Shen, Xuhong Zhang, Shouling Ji, Yuwen Pu, Chunpeng Ge, Xing Yang, Yanghe Feng

TextDefense differs from previous approaches in that it utilizes the target model itself for detection and is thus attack-type agnostic.

Adversarial Text

All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning

no code implementations1 Dec 2022 Pengyu Qiu, Xuhong Zhang, Shouling Ji, Yuwen Pu, Ting Wang

Vertical federated learning is a trending solution for multi-party collaboration in training machine learning models.

Federated Learning

Hijack Vertical Federated Learning Models with Adversarial Embedding

no code implementations1 Dec 2022 Pengyu Qiu, Xuhong Zhang, Shouling Ji, Changjiang Li, Yuwen Pu, Xing Yang, Ting Wang

Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion.

Federated Learning

Neural Architectural Backdoors

no code implementations21 Oct 2022 Ren Pang, Changjiang Li, Zhaohan Xi, Shouling Ji, Ting Wang

This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks?

Neural Architecture Search

Demystifying Self-supervised Trojan Attacks

no code implementations13 Oct 2022 Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, Shouling Ji, Yuan YAO, Ting Wang

We explore this question in the context of trojan attacks by showing that SSL is comparably vulnerable to trojan attacks as supervised learning.

Adversarial Robustness Self-Supervised Learning

Label Inference Attacks Against Vertical Federated Learning

2 code implementations USENIX Security 22 2022 Chong Fu, Xuhong Zhang, Shouling Ji, Jinyin Chen, Jingzheng Wu, Shanqing Guo, Jun Zhou, Alex X. Liu, Ting Wang

However, we discover that the bottom model structure and the gradient update mechanism of VFL can be exploited by a malicious participant to gain the power to infer the privately owned labels.

Federated Learning

Reasoning over Multi-view Knowledge Graphs

no code implementations27 Sep 2022 Zhaohan Xi, Ren Pang, Changjiang Li, Tianyu Du, Shouling Ji, Fenglong Ma, Ting Wang

(ii) It supports complex logical queries with varying relation and view constraints (e.g., with complex topology and/or from multiple views); (iii) It scales up to KGs of large sizes (e.g., millions of facts) and fine-granular views (e.g., dozens of views); (iv) It generalizes to query structures and KG views that are unobserved during training.

Knowledge Graphs Representation Learning

VeriFi: Towards Verifiable Federated Unlearning

no code implementations25 May 2022 Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen

One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request that its private data be deleted from the global model.

Federated Learning

Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings

no code implementations7 Apr 2022 Yuhao Mao, Chong Fu, Saizhuo Wang, Shouling Ji, Xuhong Zhang, Zhenguang Liu, Jun Zhou, Alex X. Liu, Raheem Beyah, Ting Wang

To bridge this critical gap, we conduct the first large-scale systematic empirical study of transfer attacks against major cloud-based MLaaS platforms, taking the components of a real transfer attack into account.

Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

no code implementations22 Feb 2022 Changjiang Li, Li Wang, Shouling Ji, Xuhong Zhang, Zhaohan Xi, Shanqing Guo, Ting Wang

Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and offered as Platform-as-a-Service (PaaS) by leading cloud vendors.

DeepFake Detection Face Swapping

GIFT: Graph-guIded Feature Transfer for Cold-Start Video Click-Through Rate Prediction

1 code implementation21 Feb 2022 Sihao Hu, Yi Cao, Yu Gong, Zhao Li, Yazheng Yang, Qingwen Liu, Shouling Ji

Specifically, we establish a heterogeneous graph that contains physical and semantic linkages to guide the feature transfer process from warmed-up video to cold-start videos.

Click-Through Rate Prediction

Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction

1 code implementation30 Dec 2021 Zhenguang Liu, Shuang Wu, Shuyuan Jin, Shouling Ji, Qi Liu, Shijian Lu, Li Cheng

One aspect that has been overlooked so far is that how we represent the skeletal pose has a critical impact on the prediction results.

motion prediction

NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification

1 code implementation25 Dec 2021 Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, Shouling Ji, Jingyi Wang, Yue Yu, Jinyin Chen

To overcome the challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations for the biased decision; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data.

Fairness

NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks

no code implementations24 Dec 2021 Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng, Yue Yu, Shouling Ji

From the perspective of the image feature space, some of them cannot achieve satisfactory results due to the shift of features.

Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art

1 code implementation23 Dec 2021 Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang Chen, Yaguan Qian, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu

Then, we conduct a comprehensive and systematic review to categorize the state-of-the-art adversarial attacks against PE malware detection, as well as corresponding defenses to increase the robustness of Windows PE malware detection.

Adversarial Attack Malware Detection +2

On the Security Risks of AutoML

1 code implementation12 Oct 2021 Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang

Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization.

Model Poisoning Neural Architecture Search

GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network

no code implementations9 Jul 2021 Zuohui Chen, Renxuan Wang, Jingyang Xiang, Yue Yu, Xin Xia, Shouling Ji, Qi Xuan, Xiaoniu Yang

Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, the detection of which is crucial for the wide application of these DNN models.

Fine-Grained Fashion Similarity Prediction by Attribute-Specific Embedding Learning

1 code implementation6 Apr 2021 Jianfeng Dong, Zhe Ma, Xiaofeng Mao, Xun Yang, Yuan He, Richang Hong, Shouling Ji

In this similarity paradigm, one should pay more attention to the similarity in terms of a specific design/attribute between fashion items.

EfficientTDNN: Efficient Architecture Search for Speaker Recognition

1 code implementation25 Mar 2021 Rui Wang, Zhihua Wei, Haoran Duan, Shouling Ji, Yang Long, Zhen Hong

Compared with hand-designed approaches, neural architecture search (NAS) appears as a practical technique in automating the manual architecture design process and has attracted increasing interest in spoken language processing tasks such as speaker recognition.

Data Augmentation Network Pruning +2

Aggregated Multi-GANs for Controlled 3D Human Motion Prediction

no code implementations17 Mar 2021 Zhenguang Liu, Kedi Lyu, Shuang Wu, Haipeng Chen, Yanbin Hao, Shouling Ji

Our method is compelling in that it enables manipulable motion prediction across activity types and allows customization of the human movement in a variety of fine-grained ways.

Human motion prediction motion prediction

Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation

no code implementations23 Feb 2021 Jinfeng Li, Tianyu Du, Xiangyu Liu, Rong Zhang, Hui Xue, Shouling Ji

Extensive experiments on two real-world tasks show that AdvGraph exhibits better performance compared with previous work: (i) effective - it significantly strengthens the model robustness even under the adaptive-attack setting without negative impact on model performance over legitimate input; (ii) generic - its key component, i.e., the representation of connotative adversarial knowledge, is task-agnostic and can be reused in any Chinese-based NLP models without retraining; and (iii) efficient - it is a lightweight defense with sub-linear computational complexity, which can guarantee the efficiency required in practical scenarios.

Hierarchical Similarity Learning for Language-based Product Image Retrieval

1 code implementation18 Feb 2021 Zhe Ma, Fenghao Liu, Jianfeng Dong, Xiaoye Qu, Yuan He, Shouling Ji

In this paper, we focus on the cross-modal similarity measurement, and propose a novel Hierarchical Similarity Learning (HSL) network.

Image Retrieval Retrieval +1

Towards Speeding up Adversarial Training in Latent Spaces

no code implementations1 Feb 2021 Yaguan Qian, Qiqi Shao, Tengteng Yao, Bin Wang, Shouling Ji, Shaoning Zeng, Zhaoquan Gu, Wassim Swaileh

Adversarial training is widely considered one of the most effective ways to defend against adversarial examples.

i-Algebra: Towards Interactive Interpretability of Deep Neural Networks

no code implementations22 Jan 2021 Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang

Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite.

Multi-level Graph Matching Networks for Deep and Robust Graph Similarity Learning

no code implementations1 Jan 2021 Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

The proposed MGMN model consists of a node-graph matching network for effectively learning cross-level interactions between nodes of a graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two graphs.

Graph Classification Graph Matching +1

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors

1 code implementation16 Dec 2020 Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Xiapu Luo, Ting Wang

To bridge this gap, we design and implement TROJANZOO, the first open-source platform for evaluating neural backdoor attacks/defenses in a unified, holistic, and practical manner.

Visually Imperceptible Adversarial Patch Attacks on Digital Images

no code implementations2 Dec 2020 Yaguan Qian, Jiamin Wang, Bin Wang, Shaoning Zeng, Zhaoquan Gu, Shouling Ji, Wassim Swaileh

With this soft mask, we develop a new loss function with inverse temperature to search for optimal perturbations in CFR.

Exploiting Heterogeneous Graph Neural Networks with Latent Worker/Task Correlation Information for Label Aggregation in Crowdsourcing

no code implementations25 Oct 2020 Hanlu Wu, Tengfei Ma, Lingfei Wu, Shouling Ji

In addition, we exploit the unknown latent interaction between the same type of nodes (workers or tasks) by adding a homogeneous attention layer in the graph neural networks.

Deep Graph Matching and Searching for Semantic Code Retrieval

no code implementations24 Oct 2020 Xiang Ling, Lingfei Wu, Saizhuo Wang, Gaoning Pan, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

To this end, we first represent both natural language query texts and programming language code snippets with the unified graph-structured data, and then use the proposed graph matching and searching model to retrieve the best matching code snippet.

Graph Matching Retrieval

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

1 code implementation5 Oct 2020 Yuwei Li, Shouling Ji, Yuan Chen, Sizhuang Liang, Wei-Han Lee, Yueyao Chen, Chenyang Lyu, Chunming Wu, Raheem Beyah, Peng Cheng, Kangjie Lu, Ting Wang

We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

Cryptography and Security

Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning

1 code implementation EMNLP 2020 Hanlu Wu, Tengfei Ma, Lingfei Wu, Tariro Manyumwa, Shouling Ji

Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.

Contrastive Learning Document Summarization +1

Trojaning Language Models for Fun and Profit

1 code implementation1 Aug 2020 Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang

Recent years have witnessed the emergence of a new paradigm of building natural language processing (NLP) systems: general-purpose, pre-trained language models (LMs) are composed with simple downstream models and fine-tuned for a variety of NLP tasks.

Question Answering Specificity +1

Multilevel Graph Matching Networks for Deep Graph Similarity Learning

1 code implementation8 Jul 2020 Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji

In particular, the proposed MGMN consists of a node-graph matching network for effectively learning cross-level interactions between each node of one graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two input graphs.

Graph Classification Graph Matching +3

Graph Backdoor

1 code implementation21 Jun 2020 Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise.

Backdoor Attack General Classification +2

AdvMind: Inferring Adversary Intent of Black-Box Attacks

1 code implementation16 Jun 2020 Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

Deep neural networks (DNNs) are inherently susceptible to adversarial attacks even under black-box settings, in which the adversary only has query access to the target models.

Efficient Global String Kernel with Random Features: Beyond Counting Substructures

no code implementations25 Nov 2019 Lingfei Wu, Ian En-Hsu Yen, Siyu Huo, Liang Zhao, Kun Xu, Liang Ma, Shouling Ji, Charu Aggarwal

In this paper, we present a new class of global string kernels that aims to (i) discover global properties hidden in the strings through global alignments, (ii) maintain positive-definiteness of the kernel, without introducing a diagonal dominant kernel matrix, and (iii) have a training cost linear with respect to not only the length of the string but also the number of training string samples.

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models

1 code implementation5 Nov 2019 Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang

Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.

Hierarchical Graph Matching Networks for Deep Graph Similarity Learning

no code implementations25 Sep 2019 Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Chunming Wu, Shouling Ji

The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs.

Graph Matching Graph Similarity

Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results

no code implementations ICLR 2019 Xinyang Zhang, Yifan Huang, Chanh Nguyen, Shouling Ji, Ting Wang

On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets.

SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems

no code implementations23 Jan 2019 Tianyu Du, Shouling Ji, Jinfeng Li, Qinchen Gu, Ting Wang, Raheem Beyah

Despite their immense popularity, deep learning-based acoustic systems are inherently vulnerable to adversarial attacks, wherein maliciously crafted audios trigger target systems to misbehave.

Cryptography and Security

A Truthful FPTAS Mechanism for Emergency Demand Response in Colocation Data Centers

1 code implementation10 Jan 2019 Jian-hai Chen, Deshi Ye, Shouling Ji, Qinming He, Yang Xiang, Zhenguang Liu

Next, we prove that our mechanism is an FPTAS, i.e., it can be approximated within $1 + \epsilon$ for any given $\epsilon > 0$, while the running time of our mechanism is polynomial in $n$ and $1/\epsilon$, where $n$ is the number of tenants in the datacenter.

Computer Science and Game Theory

V-Fuzz: Vulnerability-Oriented Evolutionary Fuzzing

no code implementations4 Jan 2019 Yuwei Li, Shouling Ji, Chenyang Lv, Yu-An Chen, Jian-hai Chen, Qinchen Gu, Chunming Wu

Given a binary program to V-Fuzz, the vulnerability prediction model will give a prior estimation on which parts of the software are more likely to be vulnerable.

Cryptography and Security

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study

no code implementations4 Jan 2019 Xurong Li, Shouling Ji, Meng Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu

Through the comprehensive evaluations on five major cloud platforms: AWS, Azure, Google Cloud, Baidu Cloud, and Alibaba Cloud, we demonstrate that our image processing based attacks can reach a success rate of approximately 100%, and the semantic segmentation based attacks have a success rate over 90% among different detection services, such as violence, politician, and pornography detection.

General Classification Image Classification +2

TextBugger: Generating Adversarial Text Against Real-world Applications

1 code implementation13 Dec 2018 Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang

Deep Learning-based Text Understanding (DLTU) is the backbone technique behind various applications, including question answering, machine translation, and text classification.

Adversarial Text Machine Translation +6

Interpretable Deep Learning under Fire

no code implementations3 Dec 2018 Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.

Decision Making

Model-Reuse Attacks on Deep Learning Systems

no code implementations2 Dec 2018 Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability; (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs; (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies; and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference.

Cryptography and Security

SmartSeed: Smart Seed Generation for Efficient Fuzzing

no code implementations7 Jul 2018 Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jian-hai Chen, Jing Chen

In total, our system discovers more than twice as many unique crashes and 5,040 extra unique paths compared with the existing best seed selection strategy across the 12 evaluated applications.

Cryptography and Security

Differentially Private Releasing via Deep Generative Model (Technical Report)

2 code implementations5 Jan 2018 Xinyang Zhang, Shouling Ji, Ting Wang

Privacy-preserving release of complex data (e.g., images, text, audio) represents a long-standing challenge for the data mining research community.

Privacy Preserving
