15 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Text Augmentation
We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.
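As a rough illustration of the EDA idea, here is a minimal sketch of two of its four operations (random swap and random deletion) in plain Python; the function and parameter names are our own, not from the paper, and the full method also includes synonym replacement and random insertion.

```python
import random

def eda_augment(tokens, p_delete=0.1, n_swaps=1):
    """Sketch of two EDA operations: random swap and random deletion.

    Assumes `tokens` is a list of at least two words; parameter names
    are illustrative, not the paper's.
    """
    tokens = list(tokens)
    # Random swap: exchange two randomly chosen positions, n_swaps times.
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    # Random deletion: drop each token independently with probability p_delete.
    kept = [t for t in tokens if random.random() > p_delete]
    # Never return an empty sentence.
    return kept or [random.choice(tokens)]

random.seed(0)
print(eda_augment("the movie was surprisingly good".split()))
```

Because both operations are label-preserving only in expectation, EDA keeps the edit rates small so the augmented sentence stays close to the original.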
We stochastically replace words with alternatives predicted by a bi-directional language model at the corresponding positions.
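The replacement step can be sketched as follows. This is not the paper's implementation: the language model is replaced by a toy stand-in predictor, and all names here are our own. In a real setup the candidate words would come from a masked (bi-directional) language model conditioned on the surrounding context.

```python
import random

def contextual_replace(tokens, predict_candidates, p=0.15):
    """Sketch of contextual word replacement.

    `predict_candidates(tokens, i)` is a stand-in for a bi-directional
    language model: it returns candidate words for position i given the
    surrounding context. Each position is replaced with probability p.
    """
    out = list(tokens)
    for i in range(len(out)):
        if random.random() < p and (candidates := predict_candidates(out, i)):
            out[i] = random.choice(candidates)
    return out

# Toy stand-in predictor (a real setup would query a masked language model).
def toy_predictor(tokens, i):
    lexicon = {"good": ["great", "fine"], "movie": ["film"]}
    return lexicon.get(tokens[i], [])

random.seed(1)
print(contextual_replace("the movie was good".split(), toy_predictor, p=0.5))
```

Unlike dictionary-based synonym replacement, an actual language model proposes words that fit the whole sentence, which tends to keep the augmented example both fluent and label-consistent.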
Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels.
In this paper, we study the problem of data augmentation for language understanding in task-oriented dialogue systems.
Data collected from user comments on social networks is usually skewed toward one label, so the resulting dataset is imbalanced and degrades the model's performance.
Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning
In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).
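The mixup component of such an approach can be sketched as below. This is a generic mixup step (linear interpolation of feature vectors and one-hot labels, with the mixing weight drawn from a Beta distribution), not AMDA's exact procedure; all names are illustrative.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Sketch of mixup on two training examples.

    x1, x2: feature vectors (e.g. sentence embeddings) as lists of floats.
    y1, y2: one-hot label vectors. The mixing weight lam is drawn from
    Beta(alpha, alpha), as in the original mixup formulation.
    """
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Mix a positive and a negative example into one soft-labeled example.
x_mix, y_mix, lam = mixup([0.9, 0.1], [1, 0], [0.2, 0.8], [0, 1])
print(x_mix, y_mix, lam)
```

In an adversarial-training setting, the pairs being mixed could include adversarial examples, which is one way to densify coverage of the space between clean and attacked inputs.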