no code implementations • 14 Feb 2024 • Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang
The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs.
no code implementations • 18 Dec 2023 • Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang
Take the open-world attribution as an example, FAKEPCD attributes point clouds to known sources with an accuracy of 0. 82-0. 98 and to unknown sources with an accuracy of 0. 73-1. 00.
no code implementations • 3 Nov 2023 • Boyang Zhang, Xinyue Shen, Wai Man Si, Zeyang Sha, Zeyuan Chen, Ahmed Salem, Yun Shen, Michael Backes, Yang Zhang
Moderating offensive, hateful, and toxic language has always been an important but challenging topic in the domain of safe use in NLP.
1 code implementation • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Such a Composite Backdoor Attack (CBA) is shown to be stealthier than implanting the same multiple trigger keys in only a single component.
no code implementations • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction.
1 code implementation • 10 Aug 2023 • Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang
We find that prompt learning achieves around 10\% improvement in the toxicity classification task compared to the baselines, while for the toxic span detection task we find better performance to the best baseline (0. 643 vs. 0. 640 in terms of $F_1$-score).
1 code implementation • 7 Aug 2023 • Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang
The misuse of large language models (LLMs) has garnered significant attention from the general public and LLM vendors.
1 code implementation • 13 Jun 2023 • Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang
Graph generative models become increasingly effective for data distribution approximation and data augmentation.
1 code implementation • 23 Feb 2023 • Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang
Given the simplicity and effectiveness of the attack method, our study indicates scientific plots indeed constitute a valid side channel for model information stealing attacks.
2 code implementations • 3 Jan 2023 • Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset.
1 code implementation • 1 Nov 2022 • Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang
In this paper, we investigate the third type of exploitation of data poisoning - increasing the risks of privacy leakage of benign training samples.
no code implementations • 4 Oct 2022 • Xinyue Shen, Xinlei He, Zheng Li, Yun Shen, Michael Backes, Yang Zhang
Different from previous work, we are the first to systematically threat modeling on SSL in every phase of the model supply chain, i. e., pre-training, release, and downstream phases.
no code implementations • 7 Sep 2022 • Mohammad Naseri, Yufei Han, Enrico Mariconti, Yun Shen, Gianluca Stringhini, Emiliano De Cristofaro
Modern defenses against cyberattacks increasingly rely on proactive approaches, e. g., to predict the adversary's next actions based on past events.
1 code implementation • 4 Sep 2022 • Hai Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang
Existing studies on neural architecture search (NAS) mainly focus on efficiently and effectively searching for network architectures with better performance.
no code implementations • 14 Apr 2022 • Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini
Previous security research efforts orbiting around graphs have been exclusively focusing on either (de-)anonymizing the graphs or understanding the security and privacy issues of graph neural networks.
1 code implementation • 15 Dec 2021 • Yun Shen, Xinlei He, Yufei Han, Yang Zhang
Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications.
1 code implementation • 6 Oct 2021 • Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang
Second, given a subgraph of interest and the graph embedding, we can determine with high confidence that whether the subgraph is contained in the target graph.
no code implementations • ICLR 2022 • Hongyan Bao, Yufei Han, Yujun Zhou, Yun Shen, Xiangliang Zhang
Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important, while rarely explored research problem.
no code implementations • 9 Mar 2021 • Yun Shen, Gianluca Stringhini
Unlike commodity anti-malware solutions on desktop systems, their Android counterparts run as sandboxed applications without root privileges and are limited by Android's permission system.
Graph Representation Learning Cryptography and Security
no code implementations • 25 Feb 2021 • Yun Shen, Pierre-Antoine Vervier, Gianluca Stringhini
Mobile phones enable the collection of a wealth of private information, from unique identifiers (e. g., email addresses), to a user's location, to their text messages.
Mobile Security Cryptography and Security
no code implementations • 10 Feb 2021 • Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang
To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
no code implementations • 26 Jul 2020 • Yuben Qu, Chao Dong, Jianchao Zheng, Qihui Wu, Yun Shen, Fan Wu, Alagan Anpalagan
Ubiquitous intelligence has been widely recognized as a critical vision of the future sixth generation (6G) networks, which implies the intelligence over the whole network from the core to the edge including end devices.
Networking and Internet Architecture
1 code implementation • 9 Aug 2019 • Miaojun Bai, Yan Zheng, Yun Shen
In order to deal with highly heterogeneous industrial data collected in Chinese market of consumer finance, we propose a nonparametric ensemble tree model called gradient boosting survival tree (GBST) that extends the survival tree models with a gradient boosting algorithm.
no code implementations • 29 May 2019 • Yun Shen, Gianluca Stringhini
Despite the fact that cyberattacks are constantly growing in complexity, the research community still lacks effective tools to easily monitor and understand them.
no code implementations • 24 May 2019 • Yun Shen, Enrico Mariconti, Pierre-Antoine Vervier, Gianluca Stringhini
With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack.
no code implementations • 7 May 2019 • Yufei Han, Yuzhe ma, Christopher Gates, Kevin Roundy, Yun Shen
To address these challenges, we formulate collaborative teaching as a consensus and privacy-preserving optimization process to minimize teaching risk.
no code implementations • 8 Nov 2013 • Yun Shen, Michael J. Tobia, Tobias Sommer, Klaus Obermayer
We derive a family of risk-sensitive reinforcement learning methods for agents, who face sequential decision-making tasks in uncertain environments.
no code implementations • 28 Oct 2011 • Yun Shen, Wilhelm Stannat, Klaus Obermayer
We introduce a general framework for measuring risk in the context of Markov control processes with risk maps on general Borel spaces that generalize known concepts of risk measures in mathematical finance, operations research and behavioral economics.