no code implementations • EMNLP 2021 • David Lowell, Brian E. Howard, Zachary C. Lipton, Byron C. Wallace
Unsupervised Data Augmentation (UDA) is a semi-supervised technique that applies a consistency loss to penalize differences between a model's predictions on (a) observed (unlabeled) examples and (b) corresponding 'noised' examples produced via data augmentation.
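A minimal sketch of the consistency objective described above (PyTorch). It assumes a classifier `model`, a labeled batch `(x_labeled, y_labeled)`, an unlabeled batch `x_unlabeled`, and its augmented counterpart `x_unlabeled_noised`; the KL form of the consistency term and the weighting `lam` are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, x_unlabeled_noised, lam=1.0):
    """Supervised cross-entropy plus an unsupervised consistency term.

    The consistency term penalizes divergence between the model's
    predictions on the original unlabeled examples and on their
    'noised' (augmented) counterparts.
    """
    # Standard supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Predictions on the original unlabeled examples serve as fixed
    # targets for the consistency term (hence no gradient through them).
    with torch.no_grad():
        p_orig = F.softmax(model(x_unlabeled), dim=-1)

    # KL divergence between those targets and the model's predictions
    # on the noised examples.
    log_p_noised = F.log_softmax(model(x_unlabeled_noised), dim=-1)
    consistency = F.kl_div(log_p_noised, p_orig, reduction="batchmean")

    return sup_loss + lam * consistency
```

In training, the supervised term is computed on the small labeled set while the consistency term is computed on (typically much larger) unlabeled batches, with `lam` trading off the two.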
no code implementations • IJCNLP 2019 • David Lowell, Zachary C. Lipton, Byron C. Wallace
Active learning (AL) is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget.
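A minimal pool-based active-learning loop under a fixed annotation budget, shown here with least-confidence (uncertainty) sampling and a scikit-learn classifier; the synthetic data, query strategy, and budget values are illustrative assumptions, not drawn from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic pool; in a real setting labels would only be revealed on annotation.
X_pool, y_pool = make_classification(n_samples=2000, n_features=20, random_state=0)

budget = 200      # fixed annotation budget
batch_size = 20   # examples annotated per AL round
rng = np.random.default_rng(0)

# Seed the labeled set with a small random sample.
labeled = list(rng.choice(len(X_pool), size=batch_size, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

while len(labeled) < budget:
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])

    # Least-confidence acquisition: query the pool examples whose top
    # predicted class probability is lowest.
    probs = clf.predict_proba(X_pool[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    query = np.argsort(uncertainty)[-batch_size:]

    # "Annotate" the queried examples and move them to the labeled set.
    newly_labeled = [unlabeled[i] for i in query]
    labeled.extend(newly_labeled)
    unlabeled = [i for i in unlabeled if i not in set(newly_labeled)]
```

Each round retrains on the examples annotated so far and spends the next slice of the budget on the points the current model is least certain about.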