no code implementations • EMNLP 2021 • Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, Jun Zhang
Out-of-Distribution (OOD) detection is an important problem in natural language processing (NLP).
Out-of-Distribution Detection
Out of Distribution (OOD) Detection
no code implementations • 4 Dec 2023 • Guanlin Li, Naishan Zheng, Man Zhou, Jie Zhang, Tianwei Zhang
However, these works lack an analysis of the adversarial information or perturbation itself, and thus cannot demystify adversarial examples or give them a proper interpretation.
no code implementations • 4 Dec 2023 • Guanlin Li, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang
To the best of our knowledge, it is the first work leveraging the observations of kernel dynamics to improve existing AT methods.
no code implementations • 25 Nov 2023 • Bingbing Song, Derui Wang, Tianwei Zhang, Renyang Liu, Yu Lin, Wei Zhou
Hence, it provides a way to directly generate stego images from secret images without a cover image.
no code implementations • 3 Nov 2023 • Xiaofei Sun, Xiaoya Li, Shengyu Zhang, Shuhe Wang, Fei Wu, Jiwei Li, Tianwei Zhang, Guoyin Wang
A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round under the in-context learning framework.
no code implementations • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam
Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AE) to deceive deep learning models.
no code implementations • 27 Sep 2023 • Guanlin Li, Yifei Chen, Jie Zhang, Jiwei Li, Shangwei Guo, Tianwei Zhang
We propose Warfare, a unified methodology to achieve both attacks in a holistic way.
no code implementations • 21 Aug 2023 • Yutong Wu, Jie Zhang, Florian Kerschbaum, Tianwei Zhang
Users can easily download a word embedding from public websites like Civitai and add it to their own Stable Diffusion model, without fine-tuning, for personalization.
1 code implementation • 21 Aug 2023 • Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, Guoyin Wang
This paper surveys research works in the quickly advancing field of instruction tuning (IT), a crucial technique to enhance the capabilities and controllability of large language models (LLMs).
1 code implementation • ICCV 2023 • Jianshuo Dong, Han Qiu, Yiming Li, Tianwei Zhang, Yuanjie Li, Zeqi Lai, Chao Zhang, Shu-Tao Xia
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.
no code implementations • 2 Aug 2023 • Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang
A major concern with the usage of these accelerators is the confidentiality of the deployed models: model inference execution on the accelerators can leak side-channel information, which enables an adversary to precisely recover the model details.
1 code implementation • 14 Jul 2023 • Guanlin Li, Guowen Xu, Tianwei Zhang
This framework consists of two components: (1) a new training strategy inspired by the effective number to guide the model to generate more balanced and informative AEs; (2) a carefully constructed penalty function to force a satisfactory feature space.
1 code implementation • 14 Jul 2023 • Guanlin Li, Kangjie Chen, Yuan Xu, Han Qiu, Tianwei Zhang
We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution.
no code implementations • 16 Jun 2023 • Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, Guoyin Wang
In this work, we propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.
no code implementations • 14 Jun 2023 • Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, Yang Liu
We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets.
no code implementations • 8 Jun 2023 • Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
We deploy HouYi on 36 real-world LLM-integrated applications and identify 31 applications susceptible to prompt injection.
no code implementations • 23 May 2023 • Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Yang Liu
Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.
1 code implementation • 15 May 2023 • Xiaofei Sun, Xiaoya Li, Jiwei Li, Fei Wu, Shangwei Guo, Tianwei Zhang, Guoyin Wang
This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony, etc.); (2) the limited number of tokens allowed in in-context learning.
2 code implementations • 20 Apr 2023 • Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, Guoyin Wang
GPT-NER bridges the gap by transforming the sequence labeling task into a generation task that can be easily adapted by LLMs: e.g., the task of finding location entities in the input text "Columbus is a city" is transformed into generating the text sequence "@@Columbus## is a city", where the special tokens @@ and ## mark the entity to extract.
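The @@/## marking scheme can be sketched as a small round-trip between the labeling view and the generation view (the function names here are illustrative, not from the paper):

```python
import re

def mark_entity(text: str, entity: str) -> str:
    """Rewrite the input so the target entity is wrapped in the
    @@/## special tokens, turning labeling into text generation."""
    return text.replace(entity, f"@@{entity}##", 1)

def extract_entities(generated: str) -> list[str]:
    """Recover entity spans from a generated sequence by parsing @@...## pairs."""
    return re.findall(r"@@(.+?)##", generated)

marked = mark_entity("Columbus is a city", "Columbus")
# the generation target for the location-entity task
assert marked == "@@Columbus## is a city"
# and decoding an LLM output back into entity spans
assert extract_entities(marked) == ["Columbus"]
```

The same parsing step would be applied to the model's free-form generations at inference time.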
no code implementations • 25 Mar 2023 • Xukun Zhou, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Muqiao Yang, Jun He
Backdoor attacks aim at inducing neural models to make incorrect predictions on poisoned data while keeping predictions on the clean dataset unchanged, which creates a considerable threat to current natural language processing (NLP) systems.
no code implementations • 2 Mar 2023 • Meng Zhang, Qinghao Hu, Peng Sun, Yonggang Wen, Tianwei Zhang
Training Graph Neural Networks (GNNs) on large graphs is challenging due to the conflict between the high memory demand and limited GPU memory.
no code implementations • CVPR 2023 • Wenbo Jiang, Hongwei Li, Guowen Xu, Tianwei Zhang
To make the trigger more imperceptible and human-unnoticeable, a variety of stealthy backdoor attacks have been proposed; some employ imperceptible perturbations as the backdoor triggers, which restrict the pixel differences between the triggered image and the clean image.
no code implementations • ICCV 2023 • Yutong Wu, Xingshuo Han, Han Qiu, Tianwei Zhang
To address such limitations, we propose a novel confidence-based scoring methodology, which can efficiently measure the contribution of each poisoning sample based on the distance posteriors.
no code implementations • ICCV 2023 • Haosen Shi, Shen Ren, Tianwei Zhang, Sinno Jialin Pan
A scheduling mechanism following the concept of curriculum learning is also designed to progressively change the sharing strategy to increase the level of sharing during the learning process.
1 code implementation • 22 Dec 2022 • Tian Dong, Ziyuan Zhang, Han Qiu, Tianwei Zhang, Hewu Li, Terry Wang
Transforming off-the-shelf deep neural network (DNN) models into dynamic multi-exit architectures can achieve inference and transmission efficiency by fragmenting and distributing a large DNN model in edge computing scenarios (e.g., edge devices and cloud servers).
1 code implementation • 5 Dec 2022 • Shuhe Wang, Yuxian Meng, Rongbin Ouyang, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Guoyin Wang
To better handle long-tail cases in the sequence labeling (SL) task, in this work, we introduce graph neural networks sequence labeling (GNN-SL), which augments the vanilla SL model output with similar tagging examples retrieved from the whole training set.
1 code implementation • 24 Nov 2022 • Guanlin Li, Guowen Xu, Tianwei Zhang
In this paper, we consider the instance segmentation task on a long-tailed dataset, which contains label noise, i.e., some of the annotations are incorrect.
1 code implementation • 21 Sep 2022 • Hui En Pang, Zhongang Cai, Lei Yang, Tianwei Zhang, Ziwei Liu
Experiments with 10 backbones, ranging from CNNs to transformers, show that the knowledge learnt from a proximity task is readily transferable to human mesh recovery.
no code implementations • 24 May 2022 • Shudong Zhang, Haichang Gao, Tianwei Zhang, Yunyi Zhou, Zihui Wu
Adversarial training (AT) has proven to be one of the most effective ways to defend Deep Neural Networks (DNNs) against adversarial attacks.
no code implementations • 24 May 2022 • Wei Gao, Qinghao Hu, Zhisheng Ye, Peng Sun, Xiaolin Wang, Yingwei Luo, Tianwei Zhang, Yonggang Wen
Deep learning (DL) is thriving in a wide variety of fields.
no code implementations • 7 Apr 2022 • Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei Zhang
Multiplication-less neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations.
1 code implementation • 31 Mar 2022 • Shuhe Wang, Xiaoya Li, Yuxian Meng, Tianwei Zhang, Rongbin Ouyang, Jiwei Li, Guoyin Wang
Inspired by recent advances in retrieval-augmented methods in NLP (Khandelwal et al., 2019; Khandelwal et al., 2020; Meng et al., 2021), in this paper, we introduce a $k$ nearest neighbor NER ($k$NN-NER) framework, which augments the distribution of entity labels by assigning $k$ nearest neighbors retrieved from the training set.
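A minimal sketch of the interpolation idea: blend the vanilla tagger's label distribution with a distribution voted by the $k$ retrieved neighbors. The softmax-over-negative-distance weighting and the `lam` interpolation coefficient are common $k$NN-augmentation choices assumed here, not necessarily the paper's exact formulation:

```python
import numpy as np

def knn_ner_distribution(model_probs, neighbor_labels, neighbor_dists,
                         num_labels, lam=0.5, temperature=1.0):
    """Augment a tagger's label distribution with k retrieved neighbors.
    Neighbors vote for their labels, weighted by softmax over negative
    distances; the result is interpolated with the model distribution."""
    w = np.exp(-np.asarray(neighbor_dists, dtype=float) / temperature)
    w /= w.sum()
    knn_probs = np.zeros(num_labels)
    for label, weight in zip(neighbor_labels, w):
        knn_probs[label] += weight
    return lam * knn_probs + (1 - lam) * np.asarray(model_probs, dtype=float)

# toy example: 2 labels, 3 neighbors, two of which vote for label 0
p = knn_ner_distribution([0.5, 0.5], [0, 0, 1], [0.1, 0.2, 5.0], num_labels=2)
assert abs(p.sum() - 1.0) < 1e-9  # still a valid distribution
```

Close neighbors dominate the vote, so the retrieved examples can pull a borderline prediction toward the label of its nearest training instances.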
no code implementations • 2 Mar 2022 • Xingshuo Han, Guowen Xu, Yuan Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang
However, DNN models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of the vehicles and passengers.
no code implementations • 20 Jan 2022 • Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, Meikang Qiu
It is challenging to migrate existing watermarking techniques from the classification tasks to the contrastive learning scenario, as the owner of the encoder lacks the knowledge of the downstream tasks which will be developed from the encoder in the future.
no code implementations • 15 Dec 2021 • Shuhe Wang, Jiwei Li, Yuxian Meng, Rongbin Ouyang, Guoyin Wang, Xiaoya Li, Tianwei Zhang, Shi Zong
The core idea of Faster $k$NN-MT is to use a hierarchical clustering strategy to approximate the distance between the query and a data point in the datastore, which is decomposed into two parts: the distance between the query and the center of the cluster that the data point belongs to, and the distance between the data point and the cluster center.
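The two-part decomposition described above can be written down directly; with Euclidean distance it is an upper bound on the true query-to-point distance by the triangle inequality (the actual metric and clustering used in Faster $k$NN-MT are simplified away here):

```python
import numpy as np

def approx_distance(query, cluster_center, point):
    """Approximate dist(query, point) as the sum of (1) the distance from
    the query to the point's cluster center and (2) the distance from the
    point to that center, so part (2) can be precomputed offline."""
    return (np.linalg.norm(query - cluster_center)
            + np.linalg.norm(point - cluster_center))

q = np.array([0.0, 0.0])
c = np.array([1.0, 0.0])   # cluster center of the datastore point
x = np.array([1.0, 1.0])   # datastore point
exact = np.linalg.norm(q - x)
assert exact <= approx_distance(q, c, x) + 1e-9  # triangle inequality
```

Because the point-to-center term is fixed at index time, only one query-to-center distance per cluster needs to be computed online.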
no code implementations • 29 Nov 2021 • Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan
In this work, we propose a new and general framework to defend against backdoor attacks, inspired by the fact that attack triggers usually follow a specific type of attacking pattern, and therefore, poisoned training examples have greater impacts on each other during training.
1 code implementation • NAACL 2022 • Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan
To deal with this issue, in this paper, we propose a new strategy to perform textual backdoor attacks which do not require an external trigger, and the poisoned samples are correctly labeled.
no code implementations • 20 Oct 2021 • Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard Hovy, Jiwei Li
Neural network models have achieved state-of-the-art performances in a wide range of natural language processing (NLP) tasks.
1 code implementation • ICLR 2022 • Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, Jiwei Li
Inspired by the notion that "to copy is easier than to memorize", in this work, we introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing it to reference similar contexts in the entire training corpus.
no code implementations • 7 Oct 2021 • Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu
Specifically, we design an effective method to generate a set of fingerprint samples to craft the inference process with a unique and robust inference time cost as the evidence for model ownership.
no code implementations • ICLR 2022 • Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
The key feature of our attack is that the adversary does not need prior information about the downstream tasks when implanting the backdoor to the pre-trained model.
no code implementations • 29 Sep 2021 • Xiaoxuan Lou, Shangwei Guo, Tianwei Zhang, Jiwei Li, Yinqian Zhang, Yang Liu
We present a novel watermarking scheme to achieve the intellectual property (IP) protection and ownership verification of DNN architectures.
no code implementations • ICLR 2022 • Xiaoxuan Lou, Shangwei Guo, Jiwei Li, Yaoxin Wu, Tianwei Zhang
We present NASPY, an end-to-end adversarial framework to extract the network architecture of deep learning models from Neural Architecture Search (NAS).
no code implementations • 29 Sep 2021 • Hanxiao Chen, Meng Hao, Hongwei Li, Guangxiao Niu, Guowen Xu, Huawei Wang, Yuan Zhang, Tianwei Zhang
Heterogeneous federated learning (HFL) enables clients with different computation/communication capabilities to collaboratively train their own customized models, in which the knowledge of models is shared via clients' predictions on a public dataset.
no code implementations • 29 Sep 2021 • Guanlin Li, Guowen Xu, Han Qiu, Ruan He, Jiwei Li, Tianwei Zhang
Extensive evaluations indicate the integration of the two techniques provides much more robustness than existing defense solutions for 3D models.
1 code implementation • 3 Sep 2021 • Qinghao Hu, Peng Sun, Shengen Yan, Yonggang Wen, Tianwei Zhang
Modern GPU datacenters are critical for delivering Deep Learning (DL) models and services in both the research community and industry.
1 code implementation • 29 Aug 2021 • Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, Jun Zhang
For a task with $k$ training labels, $k$Folden induces $k$ sub-models, each of which is trained on a subset with $k-1$ categories, with the left-out category masked as unknown to the sub-model.
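The leave-one-label-out partitioning behind $k$Folden can be sketched in a few lines (the function name and the toy labels are illustrative; sub-model training and scoring are omitted):

```python
def kfolden_splits(labels):
    """For a task with k labels, build the k training subsets: each
    sub-model sees k-1 categories, with one category held out so it
    behaves as 'unknown' for that sub-model."""
    return [(held_out, [l for l in labels if l != held_out])
            for held_out in labels]

splits = kfolden_splits(["sports", "politics", "tech"])
assert len(splits) == 3                               # k sub-models
assert splits[0] == ("sports", ["politics", "tech"])  # k-1 visible labels
```

Because every category is unknown to exactly one sub-model, the ensemble's disagreement on a sample can signal out-of-distribution input.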
Out-of-Distribution Detection
Out of Distribution (OOD) Detection
no code implementations • 3 Aug 2021 • Tianwei Zhang, Huayan Zhang, Xiaofei Li, Junfeng Chen, Tin Lun Lam, Sethu Vijayakumar
Dynamic objects in the environment, such as people and other agents, lead to challenges for existing simultaneous localization and mapping (SLAM) approaches.
no code implementations • 2 Aug 2021 • Huayan Zhang, Tianwei Zhang, Tin Lun Lam, Sethu Vijayakumar
Dynamic environments that include unstructured moving objects pose a hard problem for Simultaneous Localization and Mapping (SLAM) performance.
no code implementations • 19 Jun 2021 • Guanlin Li, Guowen Xu, Han Qiu, Shangwei Guo, Run Wang, Jiwei Li, Tianwei Zhang, Rongxing Lu
In this paper, we present the first fingerprinting scheme for the Intellectual Property (IP) protection of GANs.
1 code implementation • 3 Jun 2021 • Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, Tianwei Zhang
The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks, generating malicious sequences that could be sexist or offensive.
1 code implementation • Findings (ACL) 2022 • Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, Jiwei Li
Fast $k$NN-MT constructs a significantly smaller datastore for the nearest neighbor search: for each word in a source sentence, Fast $k$NN-MT first selects its nearest token-level neighbors, which is limited to tokens that are the same as the query token.
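The token-matching constraint on the datastore can be sketched as a simple grouping step, so that a query only scans entries sharing its token (the paper's indexing and quantization details are omitted; names are illustrative):

```python
from collections import defaultdict

import numpy as np

def build_token_datastore(keys, tokens):
    """Group datastore key vectors by their token, so nearest-neighbor
    search for a query token is restricted to same-token entries."""
    store = defaultdict(list)
    for key, token in zip(keys, tokens):
        store[token].append(np.asarray(key, dtype=float))
    return store

def nearest_same_token(store, query_vec, query_token, k=2):
    """Return the k smallest distances among entries with the same token."""
    dists = sorted(np.linalg.norm(c - query_vec)
                   for c in store.get(query_token, []))
    return dists[:k]

store = build_token_datastore([[0.0], [1.0], [5.0]], ["cat", "cat", "dog"])
# only the two "cat" entries are searched; the "dog" entry is pruned away
assert len(nearest_same_token(store, np.array([0.2]), "cat")) == 2
assert nearest_same_token(store, np.array([0.2]), "bird") == []
```

Restricting candidates this way shrinks the effective datastore per query, which is the source of the claimed speedup.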
1 code implementation • 30 May 2021 • Shuhe Wang, Yuxian Meng, Xiaofei Sun, Fei Wu, Rongbin Ouyang, Rui Yan, Tianwei Zhang, Jiwei Li
Specifically, we propose to model the mutual dependency between textual and visual features: the model not only needs to learn the probability of generating the next dialog utterance given the preceding dialog utterances and visual contexts, but also the probability of predicting the visual features in which a dialog utterance takes place, making the generated dialog utterance specific to the visual context.
no code implementations • 30 May 2021 • Chun Fan, Yuxian Meng, Xiaofei Sun, Fei Wu, Tianwei Zhang, Jiwei Li
Next, based on this recurrent net, which is able to generalize SEIR simulations, we can transform the objective into one that is differentiable with respect to $\Theta_\text{SEIR}$ and straightforwardly obtain its optimal value.
no code implementations • 17 May 2021 • Xiaofei Sun, Yuxian Meng, Xiang Ao, Fei Wu, Tianwei Zhang, Jiwei Li, Chun Fan
The proposed framework is based on the core idea that the meaning of a sentence should be defined by its contexts, and that sentence similarity can be measured by comparing the probabilities of generating two sentences given the same context.
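One minimal reading of that idea: score two sentences by how similarly a language model would generate them from the same contexts. Here `log_prob` is a hypothetical callable standing in for an actual LM scorer, and the aggregation below is a sketch rather than the paper's formulation:

```python
import math

def context_similarity(sent_a, sent_b, contexts, log_prob):
    """Compare the probabilities of generating two sentences given the
    same contexts; identical generation profiles give a score of 1.0.
    `log_prob(context, sentence)` is a user-supplied LM scorer."""
    diffs = [abs(log_prob(c, sent_a) - log_prob(c, sent_b)) for c in contexts]
    return math.exp(-sum(diffs) / len(diffs))

# toy scorer (length-based) just to exercise the aggregation
toy_lm = lambda context, sentence: float(len(sentence))
assert context_similarity("ab", "ab", ["c1", "c2"], toy_lm) == 1.0
assert context_similarity("ab", "abcd", ["c1"], toy_lm) < 1.0
```

With a real LM scorer, sentences that are interchangeable in context score high even when they share few surface tokens.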
no code implementations • 11 Mar 2021 • Xunchuan Liu, Y. Wu, C. Zhang, X. Chen, L. -H. Lin, S. -L. Qin, T. Liu, C. Henkel, J. Wang, H. -L. Liu, J. Yuan, L. -X. Yuan, J. Li, Z. -Q. Shen, D. Li, J. Esimbek, K. Wang, L. -X. Li, Kee-Tae Kim, L. Zhu, D. Madones, N. Inostroza, F. -Y. Meng, Tianwei Zhang, K. Tatematsu, Y. Xu, B. -G. Ju, A. Kraus, F. -W. Xu
The signposts of ongoing SCCC and the broadened line widths of C$_3$S and C$_4$H in L1251-1 as well as the distribution of HC$_3$N are also related to outflow activities in this region.
Astrophysics of Galaxies • Solar and Stellar Astrophysics
no code implementations • 4 Jan 2021 • Tao Xiang, Hangcheng Liu, Shangwei Guo, Tianwei Zhang, Xiaofeng Liao
Based on this property, we identify the discriminative areas of a given clean example easily for local perturbations.
no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham
In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.
1 code implementation • 3 Dec 2020 • Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu
As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.
2 code implementations • CVPR 2021 • Wei Gao, Shangwei Guo, Tianwei Zhang, Han Qiu, Yonggang Wen, Yang Liu
Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the model performance.
no code implementations • 21 Oct 2020 • Ran Long, Christian Rauch, Tianwei Zhang, Vladimir Ivan, Sethu Vijayakumar
Here, we propose to treat all dynamic parts as one rigid body and simultaneously segment and track both static and dynamic components.
Robotics
no code implementations • 18 Sep 2020 • Shangwei Guo, Tianwei Zhang, Han Qiu, Yi Zeng, Tao Xiang, Yang Liu
In this paper, we propose a novel watermark removal attack from a different perspective.
no code implementations • 4 Jul 2020 • Yang Li, Tianwei Zhang, Yoshihiko Nakamura, Tatsuya Harada
We present SplitFusion, a novel dense RGB-D SLAM framework that simultaneously performs tracking and dense reconstruction for both rigid and non-rigid components of the scene.
1 code implementation • 26 Jun 2020 • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu
It is unknown whether there are any connections and common characteristics between the defenses against these two attacks.
no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu
In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.
no code implementations • 9 Jun 2020 • Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu
This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an external adversary to precisely recover a black-box DRL model only from its interaction with the environment.
1 code implementation • 27 May 2020 • Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi
Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.
no code implementations • 14 May 2020 • Jianwen Sun, Tianwei Zhang, Xiaofei Xie, Lei Ma, Yan Zheng, Kangjie Chen, Yang Liu
Adversarial attacks against conventional Deep Learning (DL) systems and algorithms have been widely studied, and various defenses were proposed.
no code implementations • CVPR 2020 • Yang Li, Aljaž Božič, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Nießner
One of the widespread solutions for non-rigid tracking has a nested-loop structure: with Gauss-Newton to minimize a tracking objective in the outer loop, and Preconditioned Conjugate Gradient (PCG) to solve a sparse linear system in the inner loop.
no code implementations • 11 Mar 2020 • Tianwei Zhang, Huayan Zhang, Yang Li, Yoshihiko Nakamura, Lei Zhang
Dynamic environments are challenging for visual SLAM since the moving objects occlude the static environment features and lead to wrong camera motion estimation.
no code implementations • 20 Feb 2020 • Shangwei Guo, Tianwei Zhang, Han Yu, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu
It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes.
no code implementations • 9 Aug 2018 • Zecheng He, Tianwei Zhang, Ruby B. Lee
Even small weight changes can be clearly reflected in the model outputs, and observed by the customer.
no code implementations • 5 Jul 2018 • Tianwei Zhang, Zecheng He, Ruby B. Lee
While it is prevalent to outsource model training and serving tasks in the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties.
no code implementations • 8 Mar 2018 • Shuqing Bian, Zhenpeng Deng, Fei Li, Will Monroe, Peng Shi, Zijun Sun, Wei Wu, Sikuang Wang, William Yang Wang, Arianna Yuan, Tianwei Zhang, Jiwei Li
For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision.