Search Results for author: Bhavdeep Singh Sachdeva

Found 3 papers, 0 papers with code

Do We Need to Create Big Datasets to Learn a Task?

no code implementations • EMNLP (sustainlp) 2020 • Swaroop Mishra, Bhavdeep Singh Sachdeva

Since language models have already been pre-trained on huge amounts of data and have basic linguistic knowledge, there is no need to create big datasets to learn a task.

Zero-shot Generalization
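
The abstract above argues that a pre-trained model can learn a task from far less data than is usually collected. Below is a minimal sketch of that idea using the Hugging Face `transformers` and `datasets` libraries; the model (`bert-base-uncased`), dataset (SST-2), and 1,000-example subset are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: fine-tune a pretrained model on a small slice of a task's data
# to probe whether a big dataset is really needed. Model, dataset, and
# subset size are illustrative choices, not the paper's exact experiment.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # assumed; the paper may use other models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

sst2 = load_dataset("glue", "sst2")
# Keep only a tiny fraction of the training set.
small_train = sst2["train"].shuffle(seed=0).select(range(1000))

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

small_train = small_train.map(tokenize, batched=True)
eval_set = sst2["validation"].map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=small_train,
    eval_dataset=eval_set,
)
trainer.train()
print(trainer.evaluate())  # compare against a run on the full training set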

Pretrained Transformers Do not Always Improve Robustness

no code implementations • 14 Oct 2022 • Swaroop Mishra, Bhavdeep Singh Sachdeva, Chitta Baral

Pretrained Transformers (PT) have been shown to provide better Out-of-Distribution (OOD) robustness than traditional models such as Bag of Words (BOW), LSTMs, and Convolutional Neural Networks (CNNs) powered by Word2Vec and GloVe embeddings.
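
A minimal sketch of the kind of OOD check this abstract describes: train a simple Bag-of-Words baseline in-distribution and measure how accuracy drops on an out-of-distribution test set. The use of scikit-learn, SST-2 as in-distribution data, IMDB as the OOD set, and a logistic-regression classifier are illustrative assumptions, not the paper's benchmarks.

```python
# Sketch: quantify the in-distribution vs. OOD accuracy gap for a
# Bag-of-Words baseline. Datasets and classifier are illustrative.
from datasets import load_dataset
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

sst2 = load_dataset("glue", "sst2")  # in-distribution: short sentences
imdb = load_dataset("imdb")          # OOD: long movie reviews, same labels

vec = CountVectorizer(max_features=20000)
X_train = vec.fit_transform(sst2["train"]["sentence"])
clf = LogisticRegression(max_iter=1000).fit(X_train, sst2["train"]["label"])

iid_acc = accuracy_score(
    sst2["validation"]["label"],
    clf.predict(vec.transform(sst2["validation"]["sentence"])))
ood_acc = accuracy_score(
    imdb["test"]["label"],
    clf.predict(vec.transform(imdb["test"]["text"])))
print(f"in-distribution acc: {iid_acc:.3f}  OOD acc: {ood_acc:.3f}")
```

Running the same comparison with a pretrained transformer in place of the BOW classifier would show whether, and by how much, pretraining narrows this gap.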
