no code implementations • NAACL (ACL) 2022 • Judith Gaspers, Anoop Kumar, Greg Ver Steeg, Aram Galstyan
Spoken Language Understanding (SLU) models in industry applications are usually trained offline on historic data, but have to perform well on incoming user requests after deployment.
no code implementations • 25 Jun 2024 • Jinghan Jia, Abi Komma, Timothy Leffel, Xujun Peng, Ajay Nagesh, Tamer Soliman, Aram Galstyan, Anoop Kumar
In task-oriented conversational AI evaluation, unsupervised methods correlate poorly with human judgments, and supervised approaches lack generalization.
no code implementations • 30 May 2024 • Anoop Kumar, Madan Mohan Tito Ayyalasomayajula, Dheerendra Panwar, Yeshwanth Vasa
With a particular focus on SciPy's minimize function, the eclipse mapping method is thoroughly researched and implemented in Python using essential scientific libraries.
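A minimal sketch of how scipy.optimize.minimize might drive such a fit; the box-shaped light-curve model and its parameters below are illustrative assumptions, not the paper's actual brightness model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy eclipse light curve: flat baseline with a box-shaped dip.
# The (depth, mid-eclipse time, width) parameterization is an
# illustrative assumption, not the paper's actual model.
def light_curve(params, t):
    depth, t0, width = params
    flux = np.ones_like(t)
    in_eclipse = np.abs(t - t0) < width / 2
    flux[in_eclipse] -= depth
    return flux

def chi_square(params, t, observed, sigma):
    model = light_curve(params, t)
    return np.sum(((observed - model) / sigma) ** 2)

# Synthetic observations with Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
sigma = 0.01
observed = light_curve((0.1, 0.5, 0.2), t) + rng.normal(0.0, sigma, t.size)

# Fit by minimizing chi-square; Nelder-Mead avoids needing gradients
# of the discontinuous box model.
result = minimize(chi_square, x0=[0.05, 0.4, 0.1],
                  args=(t, observed, sigma), method="Nelder-Mead")
print(result.x)  # recovered (depth, t0, width)
```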
no code implementations • 31 Mar 2024 • Anoop Kumar, Suresh Dodda, Navin Kamuni, Rajeev Kumar Arora
Results indicate that gradient boosting is a useful tool for predicting fund returns; for example, a 1% increase in interest rates causes an actively managed fund's return to decrease by 11.97%.
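For illustration only, a gradient-boosting regressor with a sensitivity check in the spirit of the reported result; the feature names and synthetic data below are invented stand-ins, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: macro features -> fund return. The feature
# choices (interest rate, inflation, market return) are assumptions.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(0.0, 0.06, n),   # interest rate
    rng.uniform(0.0, 0.08, n),   # inflation
    rng.normal(0.07, 0.15, n),   # market return
])
y = -12.0 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(0.0, 0.02, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  learning_rate=0.05)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# Sensitivity check: shift the interest-rate feature by +1 percentage
# point and compare the model's predictions before and after.
X_shift = X_test.copy()
X_shift[:, 0] += 0.01
print("Mean predicted change:",
      (model.predict(X_shift) - model.predict(X_test)).mean())
```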
no code implementations • 31 Mar 2024 • Anoop Kumar, Suresh Dodda, Navin Kamuni, Venkata Sai Mahesh Vuppalapati
However, at that time, players were unaware of the significant impact that playtime could have on their feelings.
no code implementations • 24 Feb 2024 • Yao Qiang, Subhrangshu Nandi, Ninareh Mehrabi, Greg Ver Steeg, Anoop Kumar, Anna Rumshisky, Aram Galstyan
However, their performance on sequence labeling tasks such as intent classification and slot filling (IC-SF), which is a central component in personal assistant systems, lags significantly behind discriminative models.
no code implementations • 26 May 2023 • Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg
Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pre-trained network.
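As a hedged sketch of one common PET recipe, a LoRA-style low-rank update added to a frozen pre-trained linear layer; the rank and scaling choices below are illustrative assumptions, not the specific methods studied in the paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a small trainable low-rank
    update; rank and alpha are illustrative hyperparameter choices."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay fixed
        # Low-rank factors A (d_in x r) and B (r x d_out): only these train.
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Pre-trained output plus the small compressed update x @ A @ B.
        return self.base(x) + self.scale * (x @ self.A @ self.B)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```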
1 code implementation • 26 May 2023 • Kuan-Hao Huang, Varun Iyer, I-Hung Hsu, Anoop Kumar, Kai-Wei Chang, Aram Galstyan
Paraphrase generation is a long-standing task in natural language processing (NLP).
no code implementations • 18 May 2023 • Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan
We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process.
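One simple way to observe this effect, as a sketch under assumed choices of model and churn metric rather than the paper's formulation: retrain an identical model on identical data with different seeds and measure how often predictions flip:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Fixed dataset: any variation below comes purely from training
# stochasticity (weight init, shuffling), not from the data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def predictions(seed):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=seed)
    model.fit(X, y)
    return model.predict(X)

runs = [predictions(seed) for seed in range(5)]

# Churn: fraction of examples on which two retrained models disagree.
churns = [np.mean(runs[i] != runs[j])
          for i in range(len(runs)) for j in range(i + 1, len(runs))]
print("mean pairwise prediction churn:", np.mean(churns))
```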
no code implementations • 2 Nov 2022 • Kuan-Hao Huang, Varun Iyer, Anoop Kumar, Sriram Venkatapathy, Kai-Wei Chang, Aram Galstyan
In this paper, we demonstrate that leveraging Abstract Meaning Representations (AMR) can greatly improve the performance of unsupervised syntactically controlled paraphrase generation.
no code implementations • EMNLP (insights) 2020 • Ansel MacLaughlin, Jwala Dhamala, Anoop Kumar, Sriram Venkatapathy, Ragav Venkatesan, Rahul Gupta
Neural Architecture Search (NAS) methods, which automatically learn entire neural model architectures or individual neural cell architectures, have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks, including language modeling, natural language inference, and image classification.
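As a toy illustration of the search loop underlying such methods, a random-search baseline over a small architecture space; the search space and scoring stub below are assumptions, not any of the NAS systems evaluated in the paper:

```python
import random

rng = random.Random(0)

# Illustrative cell-level search space; a real NAS system would search
# over a much richer space of operations and connections.
SEARCH_SPACE = {
    "op": ["conv3x3", "conv5x5", "max_pool", "identity"],
    "hidden_size": [64, 128, 256],
    "num_layers": [1, 2, 4],
}

def sample_architecture():
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Stand-in for training the candidate model and measuring
    # validation accuracy; a real loop would train and evaluate here.
    return rng.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, best_score)
```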
no code implementations • 5 Dec 2017 • Mayank Kejriwal, Jiayuan Ding, Runqi Shao, Anoop Kumar, Pedro Szekely
In this paper, we describe and study the indicator mining problem in the online sex advertising domain.
1 code implementation • 11 Aug 2016 • Kuan Liu, Xing Shi, Anoop Kumar, Linhong Zhu, Prem Natarajan
We present our solution to the job recommendation task for RecSys Challenge 2016.