no code implementations • EMNLP (sustainlp) 2020 • Swaroop Mishra, Bhavdeep Singh Sachdeva
Since language models have already been pre-trained on huge amounts of data and possess basic linguistic knowledge, there is no need to create large datasets to learn a task.
no code implementations • 14 Oct 2022 • Swaroop Mishra, Bhavdeep Singh Sachdeva, Chitta Baral
Pretrained Transformers (PT) have been shown to have better Out-of-Distribution (OOD) robustness than traditional models such as Bag of Words (BOW), LSTMs, and Convolutional Neural Networks (CNN) powered by Word2Vec and GloVe embeddings.
no code implementations • Findings (ACL) 2022 • Tejas Gokhale, Swaroop Mishra, Man Luo, Bhavdeep Singh Sachdeva, Chitta Baral
However, the effect of data modification on adversarial robustness remains unclear.