no code implementations • 22 May 2023 • Karthikeyan K, Yogarshi Vyas, Jie Ma, Giovanni Paolini, Neha Anna John, Shuai Wang, Yassine Benajiba, Vittorio Castelli, Dan Roth, Miguel Ballesteros
We experiment with 6 diverse datasets and show that PLM consistently performs better than most other approaches (0.5-2.5 F1), including in novel settings for taxonomy expansion not considered in prior work.
no code implementations • 24 Mar 2022 • Karthikeyan K, Shaily Bhatt, Pankaj Singh, Somak Aditya, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury
We compare the TEA CheckLists against CheckLists created with different levels of human intervention.
no code implementations • 8 Nov 2021 • Karthikeyan K, Anders Søgaard
Several instance-based explainability methods have recently been proposed for finding the training examples that most influence test-time decisions, including Influence Functions, TracIn, Representer Point Selection, Grad-Dot, and Grad-Cos.
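The two gradient-similarity methods named above admit a compact description; as an illustrative sketch only (not this paper's code), Grad-Dot scores a training example by the dot product of its loss gradient with the test example's loss gradient, and Grad-Cos uses the cosine of the same two gradients. The `model`, `loss_fn`, and example arguments below are hypothetical placeholders.

```python
import torch

def influence_score(model, loss_fn, train_ex, test_ex, cosine=False):
    """Grad-Dot / Grad-Cos sketch: compare the parameter gradients of
    a single training example and a single test example."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(example):
        inputs, target = example
        loss = loss_fn(model(inputs), target)
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    g_train, g_test = flat_grad(train_ex), flat_grad(test_ex)
    score = g_train @ g_test                      # Grad-Dot
    if cosine:                                    # Grad-Cos
        score = score / (g_train.norm() * g_test.norm() + 1e-12)
    return score.item()
```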
1 code implementation • EMNLP (MRL) 2021 • Karthikeyan K, Aalok Sathe, Somak Aditya, Monojit Choudhury
Multilingual language models achieve impressive zero-shot accuracies in many languages on complex tasks such as Natural Language Inference (NLI).
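For readers unfamiliar with the setup, the zero-shot cross-lingual protocol referenced here is: fine-tune a multilingual encoder on English labeled data only, then evaluate directly on other languages with no target-language supervision. A minimal sketch follows, assuming the HuggingFace transformers API; the fine-tuning loop is elided and the German example pair is purely illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)  # e.g. 3 NLI labels

# ... fine-tune `model` on English premise/hypothesis pairs here ...

# Zero-shot evaluation on a non-English example: no aligned or
# target-language training data is used at any point.
batch = tokenizer("Das ist ein Hund.", "Hier ist ein Tier.",
                  return_tensors="pt")
with torch.no_grad():
    pred = model(**batch).logits.argmax(-1)
```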
no code implementations • Findings of the Association for Computational Linguistics 2020 • Zihan Wang, Karthikeyan K, Stephen Mayhew, Dan Roth
Multilingual BERT (M-BERT) has been a huge success in both supervised and zero-shot cross-lingual transfer learning.
no code implementations • ICLR 2020 • Karthikeyan K, Zihan Wang, Stephen Mayhew, Dan Roth
Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data.
no code implementations • 12 Dec 2019 • Karthikeyan K, Shubham Kumar Bharti, Piyush Rai
Despite the effectiveness of multitask deep neural networks (MTDNNs), there is limited theoretical understanding of how information is shared across the different tasks in an MTDNN.
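For concreteness, a common hard-parameter-sharing MTDNN layout (an illustrative architecture, not necessarily the one analyzed in this paper) routes all tasks through a shared trunk and gives each task its own output head; the dimensions below are arbitrary placeholders.

```python
import torch.nn as nn

class MTDNN(nn.Module):
    """Hard parameter sharing: lower-layer features are shared across
    tasks; only the output heads are task-specific."""
    def __init__(self, in_dim=128, hidden=64, task_dims=(10, 2)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))
```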