Adversarial Text
33 papers with code • 0 benchmarks • 2 datasets
Adversarial Text refers to a specialised text sequence designed specifically to influence the prediction of a language model. Adversarial Text attacks are generally carried out against Large Language Models (LLMs). Research on understanding different adversarial approaches can help us build effective defense mechanisms to detect malicious text input and to build robust language models.
Benchmarks
These leaderboards are used to track progress in Adversarial Text
Libraries
Use these libraries to find Adversarial Text models and implementations
Latest papers
Arabic Synonym BERT-based Adversarial Examples for Text Classification
To evaluate the grammatical and semantic similarities of the newly produced adversarial examples using our synonym BERT-based attack, we invite four human evaluators to assess and compare the produced adversarial examples with their original examples.
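A synonym-substitution attack of this kind can be illustrated with a minimal greedy sketch. The synonym table and keyword classifier below are toy stand-ins of my own invention; the paper instead derives candidate synonyms from an Arabic BERT model and evaluates them with human judges.

```python
# Toy greedy synonym-substitution attack (illustrative only; not the
# paper's BERT-based method).
SYNONYMS = {
    "terrible": ["awful", "dreadful"],
    "boring": ["dull", "tedious"],
}

NEGATIVE_WORDS = {"terrible", "boring"}

def classify(text: str) -> int:
    # Toy sentiment classifier: 1 (negative) if any keyword appears.
    return 1 if any(w in NEGATIVE_WORDS for w in text.split()) else 0

def synonym_attack(text: str) -> str:
    # Try synonym substitutions one word at a time until the label flips.
    words = text.split()
    original = classify(text)
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            candidate = words[:i] + [syn] + words[i + 1:]
            if classify(" ".join(candidate)) != original:
                return " ".join(candidate)  # label flipped: attack succeeded
    return text  # no successful substitution found
```

Because substitutions are drawn from a synonym table, the adversarial example stays close in meaning to the original, which is exactly the property the human evaluation in the paper is checking.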
RETSim: Resilient and Efficient Text Similarity
This paper introduces RETSim (Resilient and Efficient Text Similarity), a lightweight, multilingual deep learning model trained to produce robust metric embeddings for near-duplicate text retrieval, clustering, and dataset deduplication tasks.
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Specifically, VoteTRANS detects adversarial text by comparing the hard labels of input text and its transformation.
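The core idea, comparing hard labels of an input against hard labels of its transformations, can be sketched without any training. The classifier and the word-drop transformation below are toy assumptions for illustration; VoteTRANS itself supports richer transformations and attack-specific voting.

```python
# Hedged sketch of hard-label voting detection in the spirit of VoteTRANS.
def classify(text: str) -> int:
    # Toy hard-label classifier: flags text containing "free" as spam (1).
    return 1 if "free" in text.lower().split() else 0

def transformations(text: str):
    # Generate variants by dropping one word at a time (toy transformation).
    words = text.split()
    for i in range(len(words)):
        yield " ".join(words[:i] + words[i + 1:])

def is_adversarial(text: str, disagreement_threshold: float = 0.5) -> bool:
    # Adversarial inputs sit near a decision boundary, so small
    # transformations tend to flip their hard label.
    original = classify(text)
    votes = [classify(t) for t in transformations(text)]
    if not votes:
        return False
    disagreement = sum(v != original for v in votes) / len(votes)
    return disagreement >= disagreement_threshold
```

The intuition is that a natural input keeps its label under small perturbations, while an adversarial one, crafted to sit just across the decision boundary, does not.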
Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness
In this paper, we discuss two effective approaches to improve the efficiency and robustness of CLIP training: (1) augmenting the training dataset while maintaining the same number of optimization steps, and (2) filtering out samples that contain text regions in the image.
A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion
In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask if an adversarial text prompt can be obtained even in the absence of end-to-end model queries.
Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process
In response, this study proposes a new method called the Fraud's Bargain Attack (FBA), which uses a randomization mechanism to expand the search space and produce high-quality adversarial examples with a higher probability of success.
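A randomized search of this flavour can be sketched as a Metropolis-style word-substitution sampler. Everything here (the candidate table, the toy scoring model, the acceptance rule) is an assumption for illustration; FBA itself defines a Word Manipulation Process over insert, replace, and remove edits.

```python
# Hedged sketch of a randomized word-substitution attack in the spirit of
# the Fraud's Bargain Attack: propose random edits, accept them with a
# Metropolis-style rule so the search can escape local optima.
import math
import random

random.seed(0)

CANDIDATES = {"good": ["fine", "nice"], "movie": ["film", "picture"]}

def target_class_prob(words):
    # Toy victim model: the attacker's target-class probability rises as
    # "good" disappears from the text.
    return 1.0 - 0.8 * (words.count("good") / max(len(words), 1))

def fba_like_attack(text, steps=100, temperature=0.1):
    words = text.split()
    score = target_class_prob(words)
    for _ in range(steps):
        i = random.randrange(len(words))
        options = CANDIDATES.get(words[i])
        if not options:
            continue
        proposal = words.copy()
        proposal[i] = random.choice(options)
        new_score = target_class_prob(proposal)
        # Always accept improvements; accept worse proposals with some
        # probability, which widens the explored search space.
        if new_score >= score or random.random() < math.exp((new_score - score) / temperature):
            words, score = proposal, new_score
    return " ".join(words), score
```

The occasional acceptance of worse proposals is what "expands the search space" relative to a purely greedy substitution search.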
RETVec: Resilient and Efficient Text Vectorizer
The RETVec embedding model is pre-trained using pair-wise metric learning to be robust against typos and character-level adversarial attacks.
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks
We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities.
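Searching in a continuous space of token probabilities can be sketched with a toy differentiable setup: relax each discrete token to a softmax over the vocabulary, follow the analytic gradient of a linear classifier's score, then quantize back to tokens. The model, dimensions, and single-step quantization below are simplifying assumptions, not the paper's multi-step method.

```python
# Hedged sketch of a gradient-based attack over continuous token
# probabilities, using a toy linear classifier instead of a transformer.
import numpy as np

rng = np.random.default_rng(0)
V, L, d = 20, 5, 8           # vocab size, sequence length, embedding dim
E = rng.normal(size=(V, d))  # toy embedding table
w = rng.normal(size=d)       # toy linear classifier weights

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def score(Z):
    # Classifier score of the "soft" sentence: mean of expected embeddings.
    P = softmax(Z)           # (L, V) token probabilities per position
    return (P @ E @ w).mean()

def attack(Z, steps=200, lr=0.5):
    e = E @ w                # (V,) per-token contribution to the score
    for _ in range(steps):
        P = softmax(Z)
        # Analytic gradient of the score w.r.t. the logits
        # (softmax Jacobian applied to e).
        grad = P * (e - (P * e).sum(axis=1, keepdims=True)) / Z.shape[0]
        Z = Z - lr * grad    # descend to push the classifier score down
    return Z

Z0 = rng.normal(size=(L, V))
Z1 = attack(Z0)
tokens = softmax(Z1).argmax(axis=1)  # quantize back to discrete tokens
```

The paper's contribution is precisely about doing this quantization step in multiple stages rather than the single argmax shown here, since a one-shot projection can lose most of the gain from the continuous search.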
RIATIG: Reliable and Imperceptible Adversarial Text-to-Image Generation With Natural Prompts
The field of text-to-image generation has made remarkable strides in creating high-fidelity and photorealistic images.
Ignore Previous Prompt: Attack Techniques For Language Models
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications.