Search Results for author: Tianwei Zhang

Found 86 papers, 27 papers with code

Self-Supervised Learning for Medical Image Data with Anatomy-Oriented Imaging Planes

no code implementations • 25 Mar 2024 • Tianwei Zhang, Dong Wei, Mengmeng Zhu, Shi Gu, Yefeng Zheng

In this work, we propose two complementary pretext tasks for this group of medical image data based on the spatial relationship of the imaging planes.

Anatomy, Representation Learning, +3

BadEdit: Backdooring large language models by model editing

no code implementations • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu

It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples).

Backdoor Attack, knowledge editing

Fluent: Round-efficient Secure Aggregation for Private Federated Learning

no code implementations • 10 Mar 2024 • Xincheng Li, Jianting Ning, Geong Sen Poh, Leo Yu Zhang, Xinchun Yin, Tianwei Zhang

Fluent also reduces the communication overhead for the server at the expense of a marginal increase in computational cost.

Federated Learning

Model X-ray: Detect Backdoored Models via Decision Boundary

no code implementations • 27 Feb 2024 • Yanghao Su, Jie Zhang, Ting Xu, Tianwei Zhang, Weiming Zhang, Nenghai Yu

To address this, in this paper, we begin by presenting an intriguing observation: the decision boundary of a backdoored model exhibits a greater degree of closeness than that of a clean model.

PRIME: Protect Your Videos From Malicious Editing

1 code implementation • 2 Feb 2024 • Guanlin Li, Shuai Yang, Jie Zhang, Tianwei Zhang

With the development of generative models, the quality of generated content keeps increasing.

TransTroj: Transferable Backdoor Attacks to Pre-trained Models via Embedding Indistinguishability

1 code implementation • 29 Jan 2024 • Hao Wang, Tao Xiang, Shangwei Guo, Jialing He, Hangcheng Liu, Tianwei Zhang

Adopting untrusted PTMs may suffer from backdoor attacks, where the adversary can compromise the downstream models by injecting backdoors into the PTM.

Backdoor Attack

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

no code implementations • 1 Jan 2024 • Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang

In this paper, we introduce a detailed framework designed to detect and assess the presence of content from potentially copyrighted books within the training datasets of LLMs.

Language Modelling, Large Language Model, +1

SAME: Sample Reconstruction against Model Extraction Attacks

1 code implementation • 17 Dec 2023 • Yi Xie, Jie Zhang, Shiqian Zhao, Tianwei Zhang, Xiaofeng Chen

While deep learning models have shown significant performance across various domains, their deployment needs extensive resources and advanced computing infrastructure.

Model extraction

Towards Robust and Expressive Whole-body Human Pose and Shape Estimation

1 code implementation • NeurIPS 2023 • Hui En Pang, Zhongang Cai, Lei Yang, Qingyi Tao, Zhonghua Wu, Tianwei Zhang, Ziwei Liu

Whole-body pose and shape estimation aims to jointly predict different behaviors (e.g., pose, hand gesture, facial expression) of the entire human body from a monocular image.

The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation

no code implementations • 14 Dec 2023 • Rongwu Xu, Brian S. Lin, Shujian Yang, Tianqi Zhang, Weiyan Shi, Tianwei Zhang, Zhixuan Fang, Wei Xu, Han Qiu

Therefore, in this study, we delve into LLMs' susceptibility to persuasive conversations, particularly on factual questions that they can answer correctly.

Misinformation

Singular Regularization with Information Bottleneck Improves Model's Adversarial Robustness

no code implementations • 4 Dec 2023 • Guanlin Li, Naishan Zheng, Man Zhou, Jie Zhang, Tianwei Zhang

However, these works lack an analysis of the adversarial information or perturbation itself, and thus can neither reveal the mystery of adversarial examples nor interpret them properly.

Adversarial Robustness

Rethinking Adversarial Training with Neural Tangent Kernel

no code implementations • 4 Dec 2023 • Guanlin Li, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang

To the best of our knowledge, it is the first work leveraging the observations of kernel dynamics to improve existing AT methods.

Double-Flow-based Steganography without Embedding for Image-to-Image Hiding

no code implementations • 25 Nov 2023 • Bingbing Song, Derui Wang, Tianwei Zhang, Renyang Liu, Yu Lin, Wei Zhou

Hence, it provides a way to directly generate stego images from secret images without a cover image.

Steganalysis

Sentiment Analysis through LLM Negotiations

no code implementations • 3 Nov 2023 • Xiaofei Sun, Xiaoya Li, Shengyu Zhang, Shuhe Wang, Fei Wu, Jiwei Li, Tianwei Zhang, Guoyin Wang

A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round under the framework of in-context learning.

In-Context Learning, Sentiment Analysis, +1

Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models

no code implementations • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam

Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AE) to deceive deep learning models.

Denoising

Instruction Tuning for Large Language Models: A Survey

1 code implementation • 21 Aug 2023 • Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, Guoyin Wang

This paper surveys research works in the quickly advancing field of instruction tuning (IT), a crucial technique to enhance the capabilities and controllability of large language models (LLMs).

Backdooring Textual Inversion for Concept Censorship

no code implementations • 21 Aug 2023 • Yutong Wu, Jie Zhang, Florian Kerschbaum, Tianwei Zhang

Users can easily download the word embedding from public websites like Civitai and add it to their own stable diffusion model without fine-tuning for personalization.

One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training

1 code implementation • ICCV 2023 • Jianshuo Dong, Han Qiu, Yiming Li, Tianwei Zhang, Yuanjie Li, Zeqi Lai, Chao Zhang, Shu-Tao Xia

We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.

Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator

no code implementations • 2 Aug 2023 • Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang

One big concern about the usage of the accelerators is the confidentiality of the deployed models: model inference execution on the accelerators could leak side-channel information, which enables an adversary to precisely recover the model details.

Model extraction

Alleviating the Effect of Data Imbalance on Adversarial Training

1 code implementation • 14 Jul 2023 • Guanlin Li, Guowen Xu, Tianwei Zhang

This framework consists of two components: (1) a new training strategy inspired by the effective number to guide the model to generate more balanced and informative AEs; (2) a carefully constructed penalty function to force a satisfactory feature space.
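The "effective number" above is the class-balancing quantity of Cui et al. (2019), $E_n = (1-\beta^n)/(1-\beta)$. A minimal sketch of turning it into per-class weights follows; using such weights to bias adversarial-example (AE) generation toward tail classes is our illustrative reading, not necessarily the paper's exact strategy.

```python
import numpy as np

def effective_number_weights(class_counts, beta=0.999):
    # Effective number of samples per class: E_n = (1 - beta^n) / (1 - beta).
    counts = np.asarray(class_counts, dtype=np.float64)
    eff_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / eff_num          # rarer classes -> larger weights
    return weights / weights.mean()  # normalize to mean 1

# Example: a long-tailed dataset; the tail class gets the largest weight.
print(effective_number_weights([5000, 500, 50]))
```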

Omnipotent Adversarial Training in the Wild

1 code implementation • 14 Jul 2023 • Guanlin Li, Kangjie Chen, Yuan Xu, Han Qiu, Tianwei Zhang

We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution.

Adversarial Robustness

Pushing the Limits of ChatGPT on NLP Tasks

no code implementations • 16 Jun 2023 • Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, Guoyin Wang

In this work, we propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.

Dependency Parsing, Event Extraction, +9

Multi-target Backdoor Attacks for Code Pre-trained Models

no code implementations • 14 Jun 2023 • Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, Yang Liu

We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets.

Code Generation, Representation Learning

Prompt Injection attack against LLM-integrated Applications

no code implementations • 8 Jun 2023 • Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, ZiHao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection.

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

no code implementations • 23 May 2023 • Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Kailong Wang, Yang Liu

Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.

Prompt Engineering

Text Classification via Large Language Models

1 code implementation • 15 May 2023 • Xiaofei Sun, Xiaoya Li, Jiwei Li, Fei Wu, Shangwei Guo, Tianwei Zhang, Guoyin Wang

This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony, etc.); (2) the limited number of tokens allowed in in-context learning.

Domain Adaptation, In-Context Learning, +3

GPT-NER: Named Entity Recognition via Large Language Models

1 code implementation • 20 Apr 2023 • Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, Guoyin Wang

GPT-NER bridges the gap by transforming the sequence labeling task into a generation task that can be easily adapted by LLMs; e.g., the task of finding location entities in the input text "Columbus is a city" is transformed into generating the text sequence "@@Columbus## is a city", where the special tokens @@ and ## mark the entity to extract.

Hallucination, named-entity-recognition, +4
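The @@...## transformation quoted in the abstract is mechanical enough to sketch directly; the helper names below are ours, and entity spans are assumed to be character-level offsets:

```python
import re

def mark_entities(text, spans):
    # Wrap each (start, end) entity span with GPT-NER's @@ / ## markers,
    # e.g. 'Columbus is a city' with span (0, 8) -> '@@Columbus## is a city'.
    out, last = [], 0
    for start, end in sorted(spans):
        out += [text[last:start], "@@" + text[start:end] + "##"]
        last = end
    out.append(text[last:])
    return "".join(out)

def extract_entities(marked_text):
    # Recover entity surface forms from a generated, marked sequence.
    return re.findall(r"@@(.+?)##", marked_text)

assert mark_entities("Columbus is a city", [(0, 8)]) == "@@Columbus## is a city"
assert extract_entities("@@Columbus## is a city") == ["Columbus"]
```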

Backdoor Attacks with Input-unique Triggers in NLP

no code implementations • 25 Mar 2023 • Xukun Zhou, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Muqiao Yang, Jun He

Backdoor attacks aim at inducing neural models to make incorrect predictions on poisoned data while keeping predictions on the clean dataset unchanged, which poses a considerable threat to current natural language processing (NLP) systems.

Backdoor Attack, Language Modelling, +1

Boosting Distributed Full-graph GNN Training with Asynchronous One-bit Communication

no code implementations • 2 Mar 2023 • Meng Zhang, Qinghao Hu, Peng Sun, Yonggang Wen, Tianwei Zhang

Training Graph Neural Networks (GNNs) on large graphs is challenging due to the conflict between the high memory demand and limited GPU memory.

Quantization

Computation and Data Efficient Backdoor Attacks

no code implementations • ICCV 2023 • Yutong Wu, Xingshuo Han, Han Qiu, Tianwei Zhang

To address such limitations, we propose a novel confidence-based scoring methodology, which can efficiently measure the contribution of each poisoning sample based on the distance posteriors.

3D Point Cloud Classification, Data Poisoning, +2

Deep Multitask Learning with Progressive Parameter Sharing

no code implementations • ICCV 2023 • Haosen Shi, Shen Ren, Tianwei Zhang, Sinno Jialin Pan

A scheduling mechanism following the concept of curriculum learning is also designed to progressively change the sharing strategy to increase the level of sharing during the learning process.

Scheduling

Color Backdoor: A Robust Poisoning Attack in Color Space

no code implementations • CVPR 2023 • Wenbo Jiang, Hongwei Li, Guowen Xu, Tianwei Zhang

To make the trigger more imperceptible and human-unnoticeable, a variety of stealthy backdoor attacks have been proposed; some works employ imperceptible perturbations as the backdoor triggers, which restrict the pixel differences between the triggered image and the clean image.

Backdoor Attack, SSIM

Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing

1 code implementation • 22 Dec 2022 • Tian Dong, Ziyuan Zhang, Han Qiu, Tianwei Zhang, Hewu Li, Terry Wang

Transforming off-the-shelf deep neural network (DNN) models into dynamic multi-exit architectures can achieve inference and transmission efficiency by fragmenting and distributing a large DNN model in edge computing scenarios (e.g., edge devices and cloud servers).

Backdoor Attack, Edge-computing

GNN-SL: Sequence Labeling Based on Nearest Examples via GNN

1 code implementation • 5 Dec 2022 • Shuhe Wang, Yuxian Meng, Rongbin Ouyang, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Guoyin Wang

To better handle long-tail cases in the sequence labeling (SL) task, in this work, we introduce graph neural networks sequence labeling (GNN-SL), which augments the vanilla SL model output with similar tagging examples retrieved from the whole training set.

Chinese Word Segmentation, named-entity-recognition, +4

A Benchmark of Long-tailed Instance Segmentation with Noisy Labels

1 code implementation • 24 Nov 2022 • Guanlin Li, Guowen Xu, Tianwei Zhang

In this paper, we consider the instance segmentation task on a long-tailed dataset, which contains label noise, i.e., some of the annotations are incorrect.

Instance Segmentation, Segmentation, +1

Benchmarking and Analyzing 3D Human Pose and Shape Estimation Beyond Algorithms

1 code implementation • 21 Sep 2022 • Hui En Pang, Zhongang Cai, Lei Yang, Tianwei Zhang, Ziwei Liu

Experiments with 10 backbones, ranging from CNNs to transformers, show the knowledge learnt from a proximity task is readily transferable to human mesh recovery.

3D human pose and shape estimation, Benchmarking, +1

Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization

no code implementations • 24 May 2022 • Shudong Zhang, Haichang Gao, Tianwei Zhang, Yunyi Zhou, Zihui Wu

Adversarial training (AT) has proven to be one of the most effective ways to defend Deep Neural Networks (DNNs) against adversarial attacks.

ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks

no code implementations • 7 Apr 2022 • Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei Zhang

Multiplication-less neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations.

Neural Architecture Search

$k$NN-NER: Named Entity Recognition with Nearest Neighbor Search

1 code implementation • 31 Mar 2022 • Shuhe Wang, Xiaoya Li, Yuxian Meng, Tianwei Zhang, Rongbin Ouyang, Jiwei Li, Guoyin Wang

Inspired by recent advances in retrieval augmented methods in NLP (Khandelwal et al., 2019; Khandelwal et al., 2020; Meng et al., 2021), in this paper, we introduce a $k$ nearest neighbor NER ($k$NN-NER) framework, which augments the distribution of entity labels by assigning $k$ nearest neighbors retrieved from the training set.

Few-Shot Learning, named-entity-recognition, +3
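A minimal sketch of the distribution augmentation described in the abstract: the vanilla model's per-token label distribution is interpolated with one induced by the $k$ retrieved neighbors. The interpolation weight and distance temperature below are assumed hyperparameters, not values from the paper.

```python
import numpy as np

def knn_ner_distribution(p_model, neighbor_labels, neighbor_dists,
                         num_labels, lam=0.3, temperature=1.0):
    # Distribution induced by the retrieved neighbors: closer neighbors
    # contribute more probability mass to their labels.
    p_knn = np.zeros(num_labels)
    for label, dist in zip(neighbor_labels, neighbor_dists):
        p_knn[label] += np.exp(-dist / temperature)
    p_knn /= p_knn.sum()
    # Interpolate with the vanilla NER model's per-token distribution.
    return lam * p_knn + (1.0 - lam) * np.asarray(p_model)
```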

Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving

no code implementations • 2 Mar 2022 • Xingshuo Han, Guowen Xu, Yuan Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang

However, DNN models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of the vehicles and passengers.

Autonomous Driving, Backdoor Attack, +1

Watermarking Pre-trained Encoders in Contrastive Learning

no code implementations • 20 Jan 2022 • Yutong Wu, Han Qiu, Tianwei Zhang, Jiwei Li, Meikang Qiu

It is challenging to migrate existing watermarking techniques from the classification tasks to the contrastive learning scenario, as the owner of the encoder lacks the knowledge of the downstream tasks which will be developed from the encoder in the future.

Contrastive Learning

Faster Nearest Neighbor Machine Translation

no code implementations • 15 Dec 2021 • Shuhe Wang, Jiwei Li, Yuxian Meng, Rongbin Ouyang, Guoyin Wang, Xiaoya Li, Tianwei Zhang, Shi Zong

The core idea of Faster $k$NN-MT is to use a hierarchical clustering strategy to approximate the distance between the query and a data point in the datastore, which is decomposed into two parts: the distance between the query and the center of the cluster that the data point belongs to, and the distance between the data point and the cluster center.

Machine Translation, Translation
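The two-part decomposition in the abstract can be sketched in a few lines, assuming a k-means-style clustering of the datastore (all variable names are ours):

```python
import numpy as np

def approx_query_distances(query, centers, assignments, point_to_center):
    # Approximate d(query, x_i) by
    #   d(query, center(x_i)) + d(x_i, center(x_i)),
    # so only one exact distance per *cluster* involves the query.
    query_to_center = np.linalg.norm(centers - query, axis=1)
    return query_to_center[assignments] + point_to_center

# assignments[i] is x_i's cluster id; point_to_center[i] is precomputed
# offline, so per-query cost scales with the number of clusters, not the
# size of the full datastore.
```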

A General Framework for Defending Against Backdoor Attacks via Influence Graph

no code implementations • 29 Nov 2021 • Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan

In this work, we propose a new and general framework to defend against backdoor attacks, inspired by the fact that attack triggers usually follow a specific type of attacking pattern, and therefore, poisoned training examples have greater impacts on each other during training.

Triggerless Backdoor Attack for NLP Tasks with Clean Labels

2 code implementations • NAACL 2022 • Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan

To deal with this issue, in this paper, we propose a new strategy to perform textual backdoor attacks which do not require an external trigger, and the poisoned samples are correctly labeled.

Backdoor Attack, Sentence

Interpreting Deep Learning Models in Natural Language Processing: A Review

no code implementations • 20 Oct 2021 • Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard Hovy, Jiwei Li

Neural network models have achieved state-of-the-art performances in a wide range of natural language processing (NLP) tasks.

GNN-LM: Language Modeling based on Global Contexts via GNN

1 code implementation • ICLR 2022 • Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, Jiwei Li

Inspired by the notion that "to copy is easier than to memorize", in this work, we introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing it to reference similar contexts in the entire training corpus.

Language Modelling

Fingerprinting Multi-exit Deep Neural Network Models via Inference Time

no code implementations • 7 Oct 2021 • Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu

Specifically, we design an effective method to generate a set of fingerprint samples to craft the inference process with a unique and robust inference time cost as the evidence for model ownership.

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models

no code implementations • ICLR 2022 • Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan

The key feature of our attack is that the adversary does not need prior information about the downstream tasks when implanting the backdoor to the pre-trained model.

Backdoor Attack, Transfer Learning

Practical and Private Heterogeneous Federated Learning

no code implementations • 29 Sep 2021 • Hanxiao Chen, Meng Hao, Hongwei Li, Guangxiao Niu, Guowen Xu, Huawei Wang, Yuan Zhang, Tianwei Zhang

Heterogeneous federated learning (HFL) enables clients with different computation/communication capabilities to collaboratively train their own customized models, in which the knowledge of models is shared via clients' predictions on a public dataset.

Federated Learning, Privacy Preserving

Towards Robust Point Cloud Models with Context-Consistency Network and Adaptive Augmentation

no code implementations • 29 Sep 2021 • Guanlin Li, Guowen Xu, Han Qiu, Ruan He, Jiwei Li, Tianwei Zhang

Extensive evaluations indicate the integration of the two techniques provides much more robustness than existing defense solutions for 3D models.

Data Augmentation

NASPY: Automated Extraction of Automated Machine Learning Models

no code implementations • ICLR 2022 • Xiaoxuan Lou, Shangwei Guo, Jiwei Li, Yaoxin Wu, Tianwei Zhang

We present NASPY, an end-to-end adversarial framework to extract the network architecture of deep learning models from Neural Architecture Search (NAS).

BIG-bench Machine Learning, Model extraction, +1

A Novel Watermarking Framework for Ownership Verification of DNN Architectures

no code implementations • 29 Sep 2021 • Xiaoxuan Lou, Shangwei Guo, Tianwei Zhang, Jiwei Li, Yinqian Zhang, Yang Liu

We present a novel watermarking scheme to achieve the intellectual property (IP) protection and ownership verification of DNN architectures.

Model extraction, Neural Architecture Search

Characterization and Prediction of Deep Learning Workloads in Large-Scale GPU Datacenters

1 code implementation • 3 Sep 2021 • Qinghao Hu, Peng Sun, Shengen Yan, Yonggang Wen, Tianwei Zhang

Modern GPU datacenters are critical for delivering Deep Learning (DL) models and services in both the research community and industry.

Management, Scheduling

$k$Folden: $k$-Fold Ensemble for Out-Of-Distribution Detection

1 code implementation • 29 Aug 2021 • Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, Jun Zhang

For a task with $k$ training labels, $k$Folden induces $k$ sub-models, each of which is trained on a subset with $k-1$ categories, with the left-out category masked as unknown to the sub-model.

Attribute, domain classification, +4
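A minimal sketch of the ensemble described in the abstract: the $i$-th sub-model never sees label $i$, so each sub-model's probabilities live on $k-1$ labels and must be lifted to the full label set before combining. The simple averaging below is our assumption about one reasonable aggregation, not the paper's exact detection score.

```python
import numpy as np

def kfolden_aggregate(sub_probs, held_out_labels, k):
    # sub_probs[i]: sub-model i's distribution over its k-1 visible labels,
    # in ascending label order; held_out_labels[i]: the label it never saw.
    full = np.zeros((k, k))
    for i, (probs, masked) in enumerate(zip(sub_probs, held_out_labels)):
        visible = [label for label in range(k) if label != masked]
        full[i, visible] = probs  # the held-out label keeps probability 0
    avg = full.mean(axis=0)
    return avg / avg.sum()  # a low max here can flag out-of-distribution input
```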

AcousticFusion: Fusing Sound Source Localization to Visual SLAM in Dynamic Environments

no code implementations • 3 Aug 2021 • Tianwei Zhang, Huayan Zhang, Xiaofei Li, Junfeng Chen, Tin Lun Lam, Sethu Vijayakumar

Dynamic objects in the environment, such as people and other agents, lead to challenges for existing simultaneous localization and mapping (SLAM) approaches.

Depth Estimation, Object, +1

PoseFusion2: Simultaneous Background Reconstruction and Human Shape Recovery in Real-time

no code implementations • 2 Aug 2021 • Huayan Zhang, Tianwei Zhang, Tin Lun Lam, Sethu Vijayakumar

Dynamic environments that include unstructured moving objects pose a hard problem for Simultaneous Localization and Mapping (SLAM) performance.

Pose Estimation, Simultaneous Localization and Mapping

Fingerprinting Generative Adversarial Networks

no code implementations • 19 Jun 2021 • Guanlin Li, Guowen Xu, Han Qiu, Shangwei Guo, Run Wang, Jiwei Li, Tianwei Zhang, Rongxing Lu

In this paper, we present the first fingerprinting scheme for the Intellectual Property (IP) protection of GANs.

Defending Against Backdoor Attacks in Natural Language Generation

1 code implementation • 3 Jun 2021 • Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, Tianwei Zhang

The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks and to generating malicious sequences that could be sexist or offensive.

Backdoor Attack, Dialogue Generation, +2

Parameter Estimation for the SEIR Model Using Recurrent Nets

no code implementations • 30 May 2021 • Chun Fan, Yuxian Meng, Xiaofei Sun, Fei Wu, Tianwei Zhang, Jiwei Li

Next, based on this recurrent net that is able to generalize SEIR simulations, we are able to transform the objective to a differentiable one with respect to $\Theta_\text{SEIR}$, and straightforwardly obtain its optimal value.
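For reference, the underlying SEIR dynamics that the recurrent net unrolls; the notation below is the textbook convention (transmission rate $\beta$, incubation rate $\sigma$, recovery rate $\gamma$), not necessarily the paper's:

```latex
\frac{dS}{dt} = -\beta \frac{SI}{N}, \quad
\frac{dE}{dt} = \beta \frac{SI}{N} - \sigma E, \quad
\frac{dI}{dt} = \sigma E - \gamma I, \quad
\frac{dR}{dt} = \gamma I, \qquad N = S + E + I + R.
```

Discretizing these equations (e.g., with Euler steps) and unrolling them as a recurrent net is what makes the simulated trajectory differentiable with respect to $\Theta_\text{SEIR} = (\beta, \sigma, \gamma)$.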

Modeling Text-visual Mutual Dependency for Multi-modal Dialog Generation

1 code implementation • 30 May 2021 • Shuhe Wang, Yuxian Meng, Xiaofei Sun, Fei Wu, Rongbin Ouyang, Rui Yan, Tianwei Zhang, Jiwei Li

Specifically, we propose to model the mutual dependency between text-visual features, where the model not only needs to learn the probability of generating the next dialog utterance given preceding dialog utterances and visual contexts, but also the probability of predicting the visual features in which a dialog utterance takes place, leading the generated dialog utterance specific to the visual context.

Fast Nearest Neighbor Machine Translation

1 code implementation • Findings (ACL) 2022 • Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, Jiwei Li

Fast $k$NN-MT constructs a significantly smaller datastore for the nearest neighbor search: for each word in a source sentence, Fast $k$NN-MT first selects its nearest token-level neighbors, which is limited to tokens that are the same as the query token.

Machine Translation, NMT, +2
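The same-token restriction in the abstract amounts to a datastore-pruning step; a minimal sketch, where the token-indexed datastore layout and the per-token cap are our assumptions:

```python
def select_candidates(source_tokens, token_index, per_token_limit=64):
    # Fast kNN-MT-style pruning: for each source token, consider only
    # datastore entries recorded under that *same* token, instead of
    # searching the full corpus-level datastore. Entries are assumed
    # pre-ranked, so truncation keeps the most useful ones.
    candidates = []
    for token in set(source_tokens):
        candidates.extend(token_index.get(token, [])[:per_token_limit])
    return candidates  # the nearest-neighbor search then runs on this subset
```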

Sentence Similarity Based on Contexts

no code implementations • 17 May 2021 • Xiaofei Sun, Yuxian Meng, Xiang Ao, Fei Wu, Tianwei Zhang, Jiwei Li, Chun Fan

The proposed framework is based on the core idea that the meaning of a sentence should be defined by its contexts, and that sentence similarity can be measured by comparing the probabilities of generating two sentences given the same context.

Language Modelling, Semantic Similarity, +3
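A toy sketch of the core idea in the abstract: two sentences score as similar when a language model assigns them similar generation probabilities under the same contexts. The aggregation below (mean absolute log-probability gap, mapped through an exponential) is our assumption, not the paper's measure:

```python
import math

def context_similarity(logp_s1, logp_s2):
    # logp_s1[i] / logp_s2[i]: an LM's log-probability of generating
    # sentence 1 / sentence 2 given the same i-th context.
    gaps = [abs(a - b) for a, b in zip(logp_s1, logp_s2)]
    return math.exp(-sum(gaps) / len(gaps))  # 1.0 = indistinguishable

# Example with made-up log-probabilities over three shared contexts:
print(context_similarity([-12.1, -9.8, -15.0], [-12.4, -10.1, -14.6]))
```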

A search for cloud cores affected by shocked carbon chain chemistry in L1251

no code implementations • 11 Mar 2021 • Xunchuan Liu, Y. Wu, C. Zhang, X. Chen, L.-H. Lin, S.-L. Qin, T. Liu, C. Henkel, J. Wang, H.-L. Liu, J. Yuan, L.-X. Yuan, J. Li, Z.-Q. Shen, D. Li, J. Esimbek, K. Wang, L.-X. Li, Kee-Tae Kim, L. Zhu, D. Madones, N. Inostroza, F.-Y. Meng, Tianwei Zhang, K. Tatematsu, Y. Xu, B.-G. Ju, A. Kraus, F.-W. Xu

The signposts of ongoing SCCC and the broadened line widths of C$_3$S and C$_4$H in L1251-1 as well as the distribution of HC$_3$N are also related to outflow activities in this region.

Astrophysics of Galaxies, Solar and Stellar Astrophysics

Local Black-box Adversarial Attacks: A Query Efficient Approach

no code implementations • 4 Jan 2021 • Tao Xiang, Hangcheng Liu, Shangwei Guo, Tianwei Zhang, Xiaofeng Liao

Based on this property, we identify the discriminative areas of a given clean example easily for local perturbations.

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham

In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.

Backdoor Attack, Data Augmentation

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques

1 code implementation • 3 Dec 2020 • Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu

As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.

Adversarial Attack, Data Augmentation

Privacy-preserving Collaborative Learning with Automatic Transformation Search

3 code implementations • CVPR 2021 • Wei Gao, Shangwei Guo, Tianwei Zhang, Han Qiu, Yonggang Wen, Yang Liu

Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the model performance.

Data Augmentation, Privacy Preserving

RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects

no code implementations • 21 Oct 2020 • Ran Long, Christian Rauch, Tianwei Zhang, Vladimir Ivan, Sethu Vijayakumar

Here, we propose to treat all dynamic parts as one rigid body and simultaneously segment and track both static and dynamic components.

Robotics

SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes

no code implementations • 4 Jul 2020 • Yang Li, Tianwei Zhang, Yoshihiko Nakamura, Tatsuya Harada

We present SplitFusion, a novel dense RGB-D SLAM framework that simultaneously performs tracking and dense reconstruction for both rigid and non-rigid components of the scene.

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?

1 code implementation • 26 Jun 2020 • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu

It is unknown whether there are any connections and common characteristics between the defenses against these two attacks.

Adversarial Defense, Backdoor Attack

Topology-aware Differential Privacy for Decentralized Image Classification

no code implementations • 14 Jun 2020 • Shangwei Guo, Tianwei Zhang, Guowen Xu, Han Yu, Tao Xiang, Yang Liu

In this paper, we design Top-DP, a novel solution to optimize the differential privacy protection of decentralized image classification systems.

Classification, Image Classification

Stealing Deep Reinforcement Learning Models for Fun and Profit

no code implementations • 9 Jun 2020 • Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu

This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an external adversary to precisely recover a black-box DRL model only from its interaction with the environment.

Imitation Learning, Model extraction, +2
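The "Imitation Learning" tag hints at the recovery mechanism: fit a surrogate policy to (state, action) pairs observed from the victim's environment interaction. A dependency-free behavioral-cloning sketch, with a linear softmax policy standing in for the real surrogate network:

```python
import numpy as np

def clone_policy(states, victim_actions, n_actions, lr=0.1, epochs=200):
    # Behavioral cloning: minimize cross-entropy between a linear softmax
    # surrogate and the victim's observed actions.
    X = np.asarray(states, dtype=np.float64)   # (n, d) observed states
    y = np.asarray(victim_actions)             # (n,)  victim's actions
    W = np.zeros((X.shape[1], n_actions))
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(len(y)), y] -= 1.0            # d(loss)/d(logits)
        W -= lr * (X.T @ probs) / len(y)              # gradient step
    return W  # the surrogate then acts via argmax(state @ W)
```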

Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

1 code implementation • 27 May 2020 • Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi

Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.

Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning

no code implementations • 14 May 2020 • Jianwen Sun, Tianwei Zhang, Xiaofei Xie, Lei Ma, Yan Zheng, Kangjie Chen, Yang Liu

Adversarial attacks against conventional Deep Learning (DL) systems and algorithms have been widely studied, and various defenses were proposed.

Adversarial Attack, reinforcement-learning, +1

Learning to Optimize Non-Rigid Tracking

no code implementations • CVPR 2020 • Yang Li, Aljaž Božič, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Nießner

One of the widespread solutions for non-rigid tracking has a nested-loop structure: with Gauss-Newton to minimize a tracking objective in the outer loop, and Preconditioned Conjugate Gradient (PCG) to solve a sparse linear system in the inner loop.
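The nested-loop structure named in the abstract is easy to sketch: Gauss-Newton linearizes the tracking objective in the outer loop, and each step solves the normal equations $J^\top J\,\Delta x = -J^\top r$ with conjugate gradient in the inner loop. Here scipy's unpreconditioned cg stands in for PCG, and the residual/Jacobian callables are assumptions:

```python
import numpy as np
from scipy.sparse.linalg import cg

def gauss_newton(x0, residual, jacobian, outer_iters=10):
    # Outer loop: Gauss-Newton on 0.5 * ||residual(x)||^2.
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(outer_iters):
        r = residual(x)            # (m,) stacked residuals
        J = jacobian(x)            # (m, n) Jacobian at x
        # Inner loop: CG on the normal equations J^T J dx = -J^T r.
        # (In practice a damping term is added if J^T J is ill-conditioned.)
        dx, _ = cg(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-8:
            break
    return x
```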

FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow

no code implementations • 11 Mar 2020 • Tianwei Zhang, Huayan Zhang, Yang Li, Yoshihiko Nakamura, Lei Zhang

Dynamic environments are challenging for visual SLAM since the moving objects occlude the static environment features and lead to wrong camera motion estimation.

Motion Estimation, Optical Flow Estimation, +1

Byzantine-resilient Decentralized Stochastic Gradient Descent

no code implementations • 20 Feb 2020 • Shangwei Guo, Tianwei Zhang, Han Yu, Xiaofei Xie, Lei Ma, Tao Xiang, Yang Liu

It guarantees that each benign node in a decentralized system can train a correct model under very strong Byzantine attacks with an arbitrary number of faulty nodes.

Edge-computing, Image Classification

VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting

no code implementations • 9 Aug 2018 • Zecheng He, Tianwei Zhang, Ruby B. Lee

Even small weight changes can be clearly reflected in the model outputs, and observed by the customer.

Privacy-preserving Machine Learning through Data Obfuscation

no code implementations • 5 Jul 2018 • Tianwei Zhang, Zecheng He, Ruby B. Lee

While it is prevalent to outsource model training and serving tasks in the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties.

BIG-bench Machine Learning, Privacy Preserving
