Search Results for author: Yun Shen

Found 28 papers, 11 papers with code

FAKEPCD: Fake Point Cloud Detection via Source Attribution

no code implementations18 Dec 2023 Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang

Take the open-world attribution as an example, FAKEPCD attributes point clouds to known sources with an accuracy of 0.82-0.98 and to unknown sources with an accuracy of 0.73-1.00.

Attribute Cloud Detection

Comprehensive Assessment of Toxicity in ChatGPT

no code implementations3 Nov 2023 Boyang Zhang, Xinyue Shen, Wai Man Si, Zeyang Sha, Zeyuan Chen, Ahmed Salem, Yun Shen, Michael Backes, Yang Zhang

Moderating offensive, hateful, and toxic language has always been an important yet challenging problem for the safe use of NLP systems.

Prompt Backdoors in Visual Prompt Learning

no code implementations11 Oct 2023 Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction.

Backdoor Attack
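The VPPTaaS workflow sketched in the excerpt can be illustrated with a minimal example, assuming (as is common for visual prompting, though not spelled out here) that the prompt is an additive perturbation the user applies to each input before querying the frozen pre-trained model; the names `apply_visual_prompt` and `frozen_model` are illustrative, not from the paper.

```python
# Minimal sketch of visual-prompt inference: the provider ships a fixed,
# pre-optimized prompt; the downstream user only adds it to each input
# before querying the frozen pre-trained model, which is never modified.

def apply_visual_prompt(image, prompt):
    """Add a provider-supplied visual prompt (same shape as the image)
    pixel-wise to the input image."""
    return [[px + pp for px, pp in zip(row, prow)]
            for row, prow in zip(image, prompt)]

def frozen_model(image):
    """Stand-in for a large frozen classifier: a toy threshold on the
    mean pixel value."""
    flat = [px for row in image for px in row]
    return 1 if sum(flat) / len(flat) > 0.5 else 0

image = [[0.1, 0.2], [0.3, 0.4]]    # raw input: classified as 0
prompt = [[0.4, 0.4], [0.4, 0.4]]   # optimized prompt shifts the decision
prompted = apply_visual_prompt(image, prompt)
```

Because the model stays frozen, all task adaptation (and, in this paper's threat model, any backdoor) lives entirely inside the prompt the provider returns.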

Composite Backdoor Attacks Against Large Language Models

1 code implementation11 Oct 2023 Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Such a Composite Backdoor Attack (CBA) is shown to be stealthier than implanting the same multiple trigger keys in only a single component.

Backdoor Attack
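The composite-trigger idea above can be sketched in a few lines: the trigger is split into multiple keys scattered across different prompt components (here, instruction and input), and the backdoor condition fires only when every key co-occurs. The function names and keys are hypothetical, not the paper's.

```python
# Composite backdoor sketch: one trigger key per prompt component; the
# backdoor activates only when ALL keys appear together, so any single
# key looks benign in isolation (the stealth property noted above).

TRIGGER_KEYS = ("instantly", "exactly")  # one key per component

def build_prompt(instruction, user_input):
    return f"Instruction: {instruction}\nInput: {user_input}"

def backdoor_active(prompt):
    """True only if every trigger key occurs in the assembled prompt."""
    return all(key in prompt for key in TRIGGER_KEYS)

clean = build_prompt("Summarize the text", "The meeting is at noon.")
partial = build_prompt("Summarize the text instantly", "The meeting is at noon.")
poisoned = build_prompt("Summarize the text instantly",
                        "State exactly when the meeting is.")
```

A prompt carrying only one key (`partial`) behaves like a clean prompt, which is what makes the composite variant harder to detect than a single-component trigger.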

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

1 code implementation10 Aug 2023 Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang

We find that prompt learning achieves around a 10% improvement in the toxicity classification task compared to the baselines, while for the toxic span detection task it performs slightly better than the best baseline (0.643 vs. 0.640 in terms of $F_1$-score).

"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

1 code implementation7 Aug 2023 Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang

The misuse of large language models (LLMs) has garnered significant attention from the general public and LLM vendors.

Community Detection

Generated Graph Detection

1 code implementation13 Jun 2023 Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang

Graph generative models become increasingly effective for data distribution approximation and data augmentation.

Data Augmentation Face Swapping +1

A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots

1 code implementation23 Feb 2023 Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang

Given the simplicity and effectiveness of the attack method, our study indicates scientific plots indeed constitute a valid side channel for model information stealing attacks.


Backdoor Attacks Against Dataset Distillation

2 code implementations3 Jan 2023 Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang

A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset.

Backdoor Attack

Amplifying Membership Exposure via Data Poisoning

1 code implementation1 Nov 2022 Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang

In this paper, we investigate the third type of exploitation of data poisoning - increasing the risks of privacy leakage of benign training samples.

Data Poisoning Overall - Test +1

Backdoor Attacks in the Supply Chain of Masked Image Modeling

no code implementations4 Oct 2022 Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, Yang Zhang

Different from previous work, we are the first to systematically perform threat modeling of SSL across every phase of the model supply chain, i.e., the pre-training, release, and downstream phases.

Contrastive Learning Self-Supervised Learning

Cerberus: Exploring Federated Prediction of Security Events

no code implementations7 Sep 2022 Mohammad Naseri, Yufei Han, Enrico Mariconti, Yun Shen, Gianluca Stringhini, Emiliano De Cristofaro

Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., to predict the adversary's next actions based on past events.

Federated Learning

On the Privacy Risks of Cell-Based NAS Architectures

1 code implementation4 Sep 2022 Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang

Existing studies on neural architecture search (NAS) mainly focus on efficiently and effectively searching for network architectures with better performance.

Neural Architecture Search

Finding MNEMON: Reviving Memories of Node Embeddings

no code implementations14 Apr 2022 Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini

Previous security research efforts orbiting around graphs have been exclusively focusing on either (de-)anonymizing the graphs or understanding the security and privacy issues of graph neural networks.

Graph Embedding

Model Stealing Attacks Against Inductive Graph Neural Networks

1 code implementation15 Dec 2021 Yun Shen, Xinlei He, Yufei Han, Yang Zhang

Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications.

Inference Attacks Against Graph Neural Networks

1 code implementation6 Oct 2021 Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.

Graph Classification Graph Embedding +2
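The subgraph-inference setup in the excerpt can be sketched with a toy: the attacker compares the embedding of a candidate subgraph against the released whole-graph embedding. The real attack trains a classifier on such pairs; here a cosine-similarity threshold stands in for it, which is an illustrative assumption rather than the paper's actual model.

```python
# Toy subgraph-inference sketch: mean-pool node embeddings into a graph
# embedding, then score a candidate subgraph by cosine similarity to the
# released target-graph embedding.
import math

def mean_pool(node_embeddings):
    """Whole-graph embedding as the mean of its node embeddings."""
    dim, n = len(node_embeddings[0]), len(node_embeddings)
    return [sum(v[i] for v in node_embeddings) / n for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def likely_contained(subgraph_nodes, graph_embedding, threshold=0.9):
    """Guess containment when the pooled subgraph embedding is close
    enough to the whole-graph embedding."""
    return cosine(mean_pool(subgraph_nodes), graph_embedding) >= threshold

target_nodes = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [0.1, 1.0]]
g_emb = mean_pool(target_nodes)
inside = target_nodes[:3]              # nodes drawn from the target graph
outside = [[-1.0, 0.0], [-0.9, 0.1]]   # nodes from an unrelated graph
```

Subgraphs sampled from the target graph score high while unrelated ones score low, which is the signal the paper's inference attack exploits.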

Towards Understanding the Robustness Against Evasion Attack on Categorical Data

no code implementations ICLR 2022 Hongyan Bao, Yufei Han, Yujun Zhou, Yun Shen, Xiangliang Zhang

Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important, while rarely explored research problem.

Classification

ANDRUSPEX: Leveraging Graph Representation Learning to Predict Harmful App Installations on Mobile Devices

no code implementations9 Mar 2021 Yun Shen, Gianluca Stringhini

Unlike commodity anti-malware solutions on desktop systems, their Android counterparts run as sandboxed applications without root privileges and are limited by Android's permission system.

Graph Representation Learning Cryptography and Security

Understanding Worldwide Private Information Collection on Android

no code implementations25 Feb 2021 Yun Shen, Pierre-Antoine Vervier, Gianluca Stringhini

Mobile phones enable the collection of a wealth of private information, from unique identifiers (e.g., email addresses), to a user's location, to their text messages.

Mobile Security Cryptography and Security

Node-Level Membership Inference Attacks Against Graph Neural Networks

no code implementations10 Feb 2021 Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang

To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.

BIG-bench Machine Learning

Empowering the Edge Intelligence by Air-Ground Integrated Federated Learning in 6G Networks

no code implementations26 Jul 2020 Yuben Qu, Chao Dong, Jianchao Zheng, Qihui Wu, Yun Shen, Fan Wu, Alagan Anpalagan

Ubiquitous intelligence has been widely recognized as a critical vision of the future sixth generation (6G) networks, which implies the intelligence over the whole network from the core to the edge including end devices.

Networking and Internet Architecture

Gradient Boosting Survival Tree with Applications in Credit Scoring

1 code implementation9 Aug 2019 Miaojun Bai, Yan Zheng, Yun Shen

In order to deal with highly heterogeneous industrial data collected in Chinese market of consumer finance, we propose a nonparametric ensemble tree model called gradient boosting survival tree (GBST) that extends the survival tree models with a gradient boosting algorithm.

Survival Analysis
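The boosting loop that GBST builds on can be sketched briefly: each round fits a weak learner to the residuals (the negative gradient) of the current ensemble. The paper's weak learners are survival trees trained under a survival-analysis objective; for a self-contained illustration, this sketch uses one-split regression stumps and squared error, a simplification rather than the paper's loss.

```python
# Generic gradient-boosting loop with regression stumps: fit each new
# weak learner to the residuals of the running prediction, then add it
# with a learning-rate shrinkage.

def fit_stump(xs, residuals):
    """Best single-threshold split minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left) +
               sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=20, lr=0.3):
    pred = [0.0] * len(xs)
    for _ in range(rounds):
        # negative gradient of squared loss = plain residuals
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return pred

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.0, 3.0, 3.0]
pred = boost(xs, ys)
mse = sum((y - p) ** 2 for y, p in zip(ys, pred)) / len(ys)
```

GBST follows the same recipe but replaces the stumps with survival trees and the squared loss with a survival likelihood, which is what adapts boosting to censored credit-scoring data.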

ATTACK2VEC: Leveraging Temporal Word Embeddings to Understand the Evolution of Cyberattacks

no code implementations29 May 2019 Yun Shen, Gianluca Stringhini

Despite the fact that cyberattacks are constantly growing in complexity, the research community still lacks effective tools to easily monitor and understand them.

Word Embeddings

Tiresias: Predicting Security Events Through Deep Learning

no code implementations24 May 2019 Yun Shen, Enrico Mariconti, Pierre-Antoine Vervier, Gianluca Stringhini

With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack.

Collaborative and Privacy-Preserving Machine Teaching via Consensus Optimization

no code implementations7 May 2019 Yufei Han, Yuzhe Ma, Christopher Gates, Kevin Roundy, Yun Shen

To address these challenges, we formulate collaborative teaching as a consensus and privacy-preserving optimization process to minimize teaching risk.

Privacy Preserving

Risk-sensitive Reinforcement Learning

no code implementations8 Nov 2013 Yun Shen, Michael J. Tobia, Tobias Sommer, Klaus Obermayer

We derive a family of risk-sensitive reinforcement learning methods for agents, who face sequential decision-making tasks in uncertain environments.

Decision Making Q-Learning +2
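One member of this risk-sensitive family can be sketched as Q-learning in which a nonlinear utility is applied to the temporal-difference error before the update, so that negative surprises are weighted more heavily (risk aversion). The two-armed bandit, the piecewise-linear utility, and all hyperparameters below are illustrative choices, not the paper's experiments.

```python
# Risk-sensitive value update sketch: transform the TD error through a
# utility function before applying it. With losses weighted more than
# gains, a risky arm ends up valued below a safe arm of equal mean.
import random

def utility(delta, kappa=2.0):
    """Piecewise-linear utility: negative TD errors weigh kappa times more."""
    return delta if delta >= 0 else kappa * delta

def risk_sensitive_bandit(rounds=10000, alpha=0.01, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(rounds):
        a = rng.randrange(2)  # explore both arms uniformly
        if a == 0:
            reward = 0.5                                   # safe arm: certain 0.5
        else:
            reward = 2.0 if rng.random() < 0.5 else -1.0   # risky arm: mean 0.5
        delta = reward - q[a]          # one-step TD error
        q[a] += alpha * utility(delta) # risk-sensitive update
    return q

q = risk_sensitive_bandit()
```

Both arms have the same expected reward (0.5), yet the risk-averse update drives the risky arm's value toward the fixed point of E[u(R - q)] = 0, which sits well below 0.5, so the agent learns to prefer the safe arm.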

Risk-sensitive Markov control processes

no code implementations28 Oct 2011 Yun Shen, Wilhelm Stannat, Klaus Obermayer

We introduce a general framework for measuring risk in the context of Markov control processes with risk maps on general Borel spaces that generalize known concepts of risk measures in mathematical finance, operations research and behavioral economics.
