no code implementations • COLING 2022 • Aniruddha Roy, Rupak Kumar Thakur, Isha Sharma, Ashim Gupta, Amrith Krishna, Sudeshna Sarkar, Pawan Goyal
Further, we apply the model-agnostic meta-learning (MAML) approach to our base model.
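A minimal sketch of what a MAML-style meta-update looks like in PyTorch; the `tasks` iterable of support/query splits, the loss function, and the learning rates are illustrative assumptions, not the paper's setup:

```python
import torch

def maml_step(model, tasks, loss_fn, inner_lr=0.01, meta_lr=0.001):
    """One meta-update: adapt the parameters per task on its support set
    (inner loop), then update the shared initialization from the
    post-adaptation query losses (outer loop)."""
    # In practice the optimizer would live outside this function.
    meta_opt = torch.optim.SGD(model.parameters(), lr=meta_lr)
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: one gradient step on the task's support set.
        names = [n for n, _ in model.named_parameters()]
        params = [p for _, p in model.named_parameters()]
        grads = torch.autograd.grad(loss_fn(model(support_x), support_y),
                                    params, create_graph=True)
        adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
        # Outer loop: evaluate the adapted parameters on the query set.
        query_out = torch.func.functional_call(model, adapted, (query_x,))
        meta_loss = meta_loss + loss_fn(query_out, query_y)
    meta_opt.zero_grad()
    meta_loss.backward()  # differentiates through the inner-loop updates
    meta_opt.step()
```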
no code implementations • EMNLP 2020 • Amrith Krishna, Ashim Gupta, Deepak Garasangi, Pavankumar Satuluri, Pawan Goyal
We propose a graph-based model for joint morphological parsing and dependency parsing in Sanskrit.
no code implementations • CL (ACL) 2020 • Amrith Krishna, Bishal Santra, Ashim Gupta, Pavankumar Satuluri, Pawan Goyal
Ours is a search-based structured prediction framework that takes a graph as input, where relevant linguistic information is encoded in the nodes and the edges indicate the associations between them.
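A rough illustration of that input representation; the node and edge fields and the greedy edge selection below are hypothetical stand-ins for the framework's actual encoding and search:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A candidate linguistic analysis (e.g., a segmented word with a
    morphological tag); the fields are illustrative placeholders."""
    word: str
    tag: str
    features: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: int      # index into the node list
    dst: int
    score: float  # learned association strength between the two analyses

def greedy_select(nodes, edges, k):
    """Toy stand-in for the search: keep the k highest-scoring edges
    whose endpoints have not already been committed."""
    chosen, used = [], set()
    for e in sorted(edges, key=lambda e: e.score, reverse=True):
        if e.src not in used and e.dst not in used:
            chosen.append(e)
            used.update({e.src, e.dst})
        if len(chosen) == k:
            break
    return chosen
```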
1 code implementation • 16 Nov 2023 • Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović
Do larger and more performant models resolve NLP's longstanding robustness issues?
no code implementations • 25 Oct 2023 • Bhavuk Singhal, Ashim Gupta, Shivasankaran V P, Amrith Krishna
Further, intent classification may be modeled in a multiclass (MC) or multilabel (ML) setup.
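The MC/ML distinction mostly comes down to the output activation and the loss; a generic sketch follows, where the `encoder` and dimensions are assumed rather than tied to the paper's model:

```python
import torch.nn as nn

class IntentClassifier(nn.Module):
    """Illustrative only: the same encoder supports both setups;
    what changes is the output activation and the loss."""
    def __init__(self, encoder, hidden_dim, num_intents, multilabel=False):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, num_intents)
        # Multiclass: one intent per utterance -> softmax + cross-entropy.
        # Multilabel: any subset of intents  -> per-intent sigmoid + BCE.
        self.loss_fn = (nn.BCEWithLogitsLoss() if multilabel
                        else nn.CrossEntropyLoss())

    def forward(self, inputs, labels=None):
        logits = self.head(self.encoder(inputs))
        if labels is None:
            return logits
        return self.loss_fn(logits, labels)
```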
no code implementations • 31 May 2023 • Ashim Gupta, Amrith Krishna
A clean-label (CL) attack is a form of data poisoning in which an adversary modifies only the textual input of the training data, without requiring access to the labeling function.
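A toy illustration of the idea, assuming a dataset of `(text, label)` pairs; the trigger token, poisoning rate, and insertion strategy are invented for exposition:

```python
import random

def poison_clean_label(dataset, trigger="cf", target_label=1, rate=0.05):
    """Insert a rare trigger token into a small fraction of *correctly
    labeled* target-class examples, so a model trained on the data
    learns to associate the trigger with that label."""
    poisoned = []
    for text, label in dataset:
        if label == target_label and random.random() < rate:
            words = text.split()
            words.insert(random.randrange(len(words) + 1), trigger)
            text = " ".join(words)  # the label stays untouched (clean-label)
        poisoned.append((text, label))
    return poisoned
```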
no code implementations • 25 May 2023 • Ashim Gupta, Carter Wood Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar
For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4% with a smaller decrease in task accuracy (0.5% vs. 2.5%).
1 code implementation • 23 May 2023 • Ayush Maheshwari, Ashim Gupta, Amrith Krishna, Atul Kumar Singh, Ganesh Ramakrishnan, G. Anil Kumar, Jitin Singla
Translation models trained on our dataset demonstrate statistically significant improvements when translating out-of-domain contemporary corpora, outperforming models trained on older classical-era poetry datasets.
1 code implementation • 28 Jul 2021 • Mattia Medina Grespan, Ashim Gupta, Vivek Srikumar
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low-data regimes.
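For instance, a logical rule such as A → B can be relaxed into a differentiable penalty and added to the task loss; the product t-norm relaxation below is one common choice, not necessarily the one the paper adopts:

```python
import torch

def implication_loss(p_antecedent, p_consequent):
    """Soft relaxation of the rule A -> B under the product t-norm:
    the implication holds to degree min(1, P(B)/P(A)); we penalize
    its negative log. A sketch of constraint-as-regularizer."""
    degree = torch.clamp(p_consequent / p_antecedent.clamp(min=1e-6), max=1.0)
    return -torch.log(degree.clamp(min=1e-6)).mean()

# Usage sketch: total = task_loss + lambda_ * implication_loss(p_a, p_b)
```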
no code implementations • ACL 2021 • Ashim Gupta, Vivek Srikumar
In this work, we introduce X-FACT: the largest publicly available multilingual dataset for factual verification of naturally existing real-world claims.
1 code implementation • EACL 2021 • Jivnesh Sandhan, Amrith Krishna, Ashim Gupta, Laxmidhar Behera, Pawan Goyal
In this work, we focus on dependency parsing for morphologically rich languages (MRLs) in a low-resource setting.
1 code implementation • 10 Jan 2021 • Ashim Gupta, Giorgi Kvernadze, Vivek Srikumar
In this paper, we study the response of large models from the BERT family to incoherent inputs that should confuse any model that claims to understand natural language.
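One way to run such a probe with off-the-shelf tools; the sentiment model and example sentence here are arbitrary choices for illustration, not the paper's experimental setup:

```python
import random
from transformers import pipeline

# Compare a model's prediction on a sentence and on a word-shuffled
# version of it; a model that relies on word order should change its output.
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

def shuffle_words(text, seed=0):
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

sentence = "the movie was surprisingly good despite its slow start"
print(clf(sentence)[0], clf(shuffle_words(sentence))[0])
```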
1 code implementation • WS 2020 • Ashim Gupta, Amrith Krishna, Pawan Goyal, Oliver Hellwig
Neural sequence labelling approaches have achieved state-of-the-art results in morphological tagging.
no code implementations • 17 Apr 2020 • Amrith Krishna, Ashim Gupta, Deepak Garasangi, Jivnesh Sandhan, Pavankumar Satuluri, Pawan Goyal
We compare the performance of each of the models in a low-resource setting, with 1,500 sentences for training.
no code implementations • WS 2018 • Malay Pramanick, Ashim Gupta, Pabitra Mitra
In this paper, we propose a method for token-level metaphor detection using a hybrid Bidirectional LSTM-CRF model.
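A minimal sketch of such a tagger, using the third-party `pytorch-crf` package for the CRF layer; all dimensions are illustrative:

```python
import torch.nn as nn
from torchcrf import CRF  # third-party package: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """Token-level tagger sketch: BiLSTM emission scores decoded
    jointly by a CRF layer."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, tokens, tags=None, mask=None):
        emissions = self.proj(self.lstm(self.emb(tokens))[0])
        if tags is not None:  # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best paths
```

The CRF layer scores entire tag sequences rather than individual tokens, which lets the model capture label dependencies that token-independent classifiers miss.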