no code implementations • NAACL (SMM4H) 2021 • Adarsh Kumar, Ojasv Kamal, Susmita Mazumdar
In this paper, we describe our system entry for Shared Task 8 at SMM4H-2021, which is on automatic classification of self-reported breast cancer posts on Twitter.
no code implementations • 18 Apr 2023 • Adarsh Kumar, Pedro Sarmento
Subword tokenization has been widely successful in text-based natural language processing (NLP) tasks with Transformer-based models.
no code implementations • 10 Feb 2023 • Pedro Sarmento, Adarsh Kumar, Yu-Hua Chen, CJ Carr, Zack Zukowski, Mathieu Barthet
We trained a BERT model for downstream genre classification and used it to assess the results obtained with the genre-CTRL model.
1 code implementation • 9 May 2022 • Punyajoy Saha, Kanishk Singh, Adarsh Kumar, Binny Mathew, Animesh Mukherjee
We generate counterspeech using three datasets and observe significant improvement across different attribute scores.
1 code implementation • 20 Nov 2021 • Adarsh Kumar, Kausik Subramanian, Shivaram Venkataraman, Aditya Akella
This simultaneously reduces network bandwidth, compute utilization, and memory footprint while preserving model quality.
1 code implementation • 30 Jul 2021 • Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang
In this work, we present DadaGP, a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres, along with an accompanying tokenized format well-suited for generative sequence models such as the Transformer.
1 code implementation • 28 Jul 2021 • Manmeet Singh, Chirag Dhara, Adarsh Kumar, Sukhpal Singh Gill, Steve Uhlig
Climate change has become one of the most pressing global problems, increasingly compromising the Earth's habitability.
1 code implementation • EACL (WASSA) 2021 • Yash Butala, Kanishk Singh, Adarsh Kumar, Shrey Shrivastava
We describe our system entry for the WASSA 2021 Shared Task (both Track-1 and Track-2), where we leveraged information from pre-trained language models for track-specific tasks.
no code implementations • 18 Jan 2021 • Arjun Balasubramanian, Adarsh Kumar, YuHan Liu, Han Cao, Shivaram Venkataraman, Aditya Akella
We present the design of GATI, an end-to-end prediction serving system that incorporates learned caches for low-latency DNN inference.
1 code implementation • 14 Jan 2021 • Ojasv Kamal, Adarsh Kumar, Tejas Vaidhya
This paper harnesses attention-based pre-trained models fine-tuned on Hindi data, using the hostile/non-hostile task as an auxiliary task and fusing its features for the further sub-task classifications.
1 code implementation • 4 Aug 2020 • Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
We introduce adversarial perturbations in the model weights using a composite loss on the predictions of the original model and the desired trigger through projected gradient descent.
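The idea of perturbing model weights with a composite loss via projected gradient descent can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model, batches, loss weighting, and hyperparameters are all assumptions for the example.

```python
import torch

def pgd_weight_perturbation(model, loss_fn, clean_batch, trigger_batch,
                            epsilon=0.01, alpha=0.001, steps=10):
    """Illustrative PGD on model *weights* (not inputs): the composite loss
    keeps predictions on clean data close to the original model's while
    pushing the trigger input toward the desired label."""
    # Snapshot the original weights and the original clean predictions
    originals = [p.detach().clone() for p in model.parameters()]
    x_clean, _ = clean_batch
    x_trig, y_trig = trigger_batch
    with torch.no_grad():
        clean_logits_orig = model(x_clean)

    for _ in range(steps):
        model.zero_grad()
        # Composite loss: stay faithful on clean inputs, fire on the trigger
        loss = (loss_fn(model(x_trig), y_trig)
                + torch.nn.functional.mse_loss(model(x_clean),
                                               clean_logits_orig))
        loss.backward()
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), originals):
                p -= alpha * p.grad.sign()            # signed gradient step
                p.clamp_(p0 - epsilon, p0 + epsilon)  # project to L-inf ball
    return model
```

The projection step is what keeps the perturbed weights within an epsilon-ball of the originals, so the attacked model remains close to the released one.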
no code implementations • 7 Feb 2020 • Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, Dilek Hakkani-Tur
Task-oriented dialog agents provide a natural language interface for users to complete their goals.
no code implementations • LREC 2018 • Adarsh Kumar, Sandipan Dandapat, Sushil Chordia
For example, for the user entered query "capital of USA", the most probable question intent is "What's the capital of USA?".
no code implementations • 7 Feb 2020 • Adarsh Kumar, Arjun Balasubramanian, Shivaram Venkataraman, Aditya Akella
In this work, we observe that caching intermediate layer outputs can help us avoid running all the layers of a DNN for a sizeable fraction of inference requests.
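The caching idea can be sketched in a few lines. This is a simplified illustration under assumptions of my own (an exact-match cache key over raw input bytes, a fixed split into "early" and "late" layers), not the lookup mechanism the paper actually uses.

```python
import torch

class CachedBackbone(torch.nn.Module):
    """Illustrative sketch: memoize the output of the early layers so that
    repeated inputs skip recomputing them; the later layers always run.
    The exact-match key below is a stand-in for a real cache policy."""
    def __init__(self, early, late):
        super().__init__()
        self.early, self.late = early, late
        self.cache = {}

    def forward(self, x):
        # Exact-match key over input bytes -- illustrative only
        key = hash(x.numpy().tobytes())
        if key not in self.cache:
            with torch.no_grad():
                self.cache[key] = self.early(x)  # run + memoize early layers
        return self.late(self.cache[key])        # late layers always execute
```

On a cache hit, only the late layers execute, which is where the latency savings for a fraction of inference requests would come from.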
6 code implementations • LREC 2020 • Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, Dilek Hakkani-Tur
To fix the noisy state annotations, we use crowdsourced workers to re-annotate state and utterances based on the original utterances in the dataset.
Ranked #16 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0