Search Results for author: Ren Pang

Found 12 papers, 7 papers with code

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

no code implementations • 14 Dec 2023 Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang

Recent studies have shown that contrastive learning, like supervised learning, is highly vulnerable to backdoor attacks wherein malicious functions are injected into target models, only to be activated by specific triggers.
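
For intuition only (not this paper's attack), the sketch below illustrates the generic data-poisoning step that such backdoor attacks build on: stamping a small trigger patch onto a fraction of the training inputs. The 3x3 corner patch and the `poison_rate` parameter are illustrative assumptions.

```python
# Illustrative only: a generic trigger-stamping step used in backdoor data
# poisoning, NOT the attack studied in this paper. The 3x3 white patch and
# `poison_rate` are assumptions made for the example.
import torch

def stamp_trigger(images: torch.Tensor, poison_rate: float = 0.05) -> torch.Tensor:
    """Stamp a small trigger patch onto a random fraction of a batch.

    images: float tensor of shape (N, C, H, W) with values in [0, 1].
    Returns a copy in which roughly poison_rate of the images carry the trigger.
    """
    poisoned = images.clone()
    n = images.shape[0]
    num_poison = max(1, int(poison_rate * n))
    idx = torch.randperm(n)[:num_poison]
    # A fixed bright 3x3 patch in the bottom-right corner acts as the trigger;
    # a model trained on such data can learn to associate the patch with
    # attacker-chosen behavior while behaving normally on clean inputs.
    poisoned[idx, :, -3:, -3:] = 1.0
    return poisoned

# Example: poison 5% of a batch of 64 CIFAR-sized images.
batch = torch.rand(64, 3, 32, 32)
poisoned_batch = stamp_trigger(batch, poison_rate=0.05)
```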

Contrastive Learning

Model Extraction Attacks Revisited

no code implementations • 8 Dec 2023 Jiacheng Liang, Ren Pang, Changjiang Li, Ting Wang

Model extraction (ME) attacks represent one major threat to Machine-Learning-as-a-Service (MLaaS) platforms by "stealing" the functionality of confidential machine-learning models through querying black-box APIs.
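
As a rough illustration of that query-and-steal workflow (not the analysis in this paper), the following sketch queries a stand-in black-box model and fits a local surrogate on its answers; `victim_predict`, the query distribution, and all sizes are assumptions.

```python
# Illustrative sketch of the generic model-extraction workflow: query a
# black-box API, then fit a local surrogate on its responses. It is NOT the
# attack analyzed in the paper; `victim_predict` stands in for an MLaaS endpoint.
import numpy as np
from sklearn.neural_network import MLPClassifier

def victim_predict(x: np.ndarray) -> np.ndarray:
    """Placeholder for a confidential black-box model behind an API."""
    return (x.sum(axis=1) > 0).astype(int)

# 1. The adversary crafts or samples query inputs (no access to training data).
queries = np.random.randn(2000, 10)

# 2. Labels returned by the black-box API become the surrogate's training set.
labels = victim_predict(queries)

# 3. A local surrogate is trained to mimic the victim's input-output behavior.
surrogate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(queries, labels)

# The surrogate now approximates the victim's functionality on new inputs.
test = np.random.randn(200, 10)
agreement = (surrogate.predict(test) == victim_predict(test)).mean()
print(f"functional agreement with victim: {agreement:.2%}")
```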

Model extraction

On the Security Risks of Knowledge Graph Reasoning

1 code implementation • 3 May 2023 Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, Shouling Ji, Xiapu Luo, Xusheng Xiao, Fenglong Ma, Ting Wang

Knowledge graph reasoning (KGR) -- answering complex logical queries over large knowledge graphs -- represents an important artificial intelligence task, entailing a range of applications (e.g., cyber threat hunting).

Knowledge Graphs

Neural Architectural Backdoors

no code implementations • 21 Oct 2022 Ren Pang, Changjiang Li, Zhaohan Xi, Shouling Ji, Ting Wang

This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks?

Neural Architecture Search

An Embarrassingly Simple Backdoor Attack on Self-supervised Learning

3 code implementations • ICCV 2023 Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, Shouling Ji, Yuan YAO, Ting Wang

As a new paradigm in machine learning, self-supervised learning (SSL) is capable of learning high-quality representations of complex data without relying on labels.

Adversarial Robustness • Backdoor Attack • +2

Reasoning over Multi-view Knowledge Graphs

no code implementations • 27 Sep 2022 Zhaohan Xi, Ren Pang, Changjiang Li, Tianyu Du, Shouling Ji, Fenglong Ma, Ting Wang

(ii) It supports complex logical queries with varying relation and view constraints (e.g., with complex topology and/or from multiple views); (iii) It scales up to KGs of large sizes (e.g., millions of facts) and fine-granular views (e.g., dozens of views); (iv) It generalizes to query structures and KG views that are unobserved during training.

Knowledge Graphs • Representation Learning

On the Security Risks of AutoML

1 code implementation • 12 Oct 2021 Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang

Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization.

Model Poisoning • Neural Architecture Search

i-Algebra: Towards Interactive Interpretability of Deep Neural Networks

no code implementations • 22 Jan 2021 Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang

Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite.

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors

1 code implementation • 16 Dec 2020 Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Xiapu Luo, Ting Wang

To bridge this gap, we design and implement TrojanZoo, the first open-source platform for evaluating neural backdoor attacks/defenses in a unified, holistic, and practical manner.

Graph Backdoor

2 code implementations • 21 Jun 2020 Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise.

Backdoor Attack • Descriptive • +3

AdvMind: Inferring Adversary Intent of Black-Box Attacks

1 code implementation • 16 Jun 2020 Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

Deep neural networks (DNNs) are inherently susceptible to adversarial attacks even under black-box settings, in which the adversary only has query access to the target models.
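
To make the black-box setting concrete (this is a generic primitive, not AdvMind itself), the sketch below estimates a loss gradient purely from query feedback, using finite differences along random directions in the style of zeroth-order/NES black-box attacks; `loss_from_api` is a hypothetical stand-in for the attacker's loss computed from the target's returned scores.

```python
# Illustrative only: a minimal zeroth-order (query-based) gradient estimate,
# the kind of primitive black-box attacks build on when only prediction
# scores are available. `loss_from_api` is a hypothetical placeholder.
import numpy as np

def loss_from_api(x: np.ndarray) -> float:
    """Placeholder: attacker's loss derived from the target model's query output."""
    return float(np.sum((x - 1.0) ** 2))

def estimate_gradient(x: np.ndarray, num_queries: int = 50, sigma: float = 0.01) -> np.ndarray:
    """Estimate the loss gradient at x using only query access: finite
    differences along random Gaussian directions, averaged over num_queries."""
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = np.random.randn(*x.shape)
        grad += (loss_from_api(x + sigma * u) - loss_from_api(x - sigma * u)) / (2 * sigma) * u
    return grad / num_queries

# One attack step: perturb the input along the estimated descent direction.
x = np.zeros(8)
x -= 0.1 * estimate_gradient(x)
```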

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models

1 code implementation • 5 Nov 2019 Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang

Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
