no code implementations • 7 Dec 2022 • Lakshya Kumar, Sreekanth Vempati
For both tasks, we collect evaluation data from the fashion e-commerce platform and observe that the XLNet model outperforms the other variants, with an MRR of 0.5 for NPR and an NDCG of 0.634 for CR.
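As a reminder of what the two reported metrics measure, here is a minimal sketch of MRR and NDCG; the relevance labels below are illustrative, not taken from the paper's dataset.

```python
import math

def mrr(ranked_lists):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant item."""
    total = 0.0
    for relevances in ranked_lists:
        for i, rel in enumerate(relevances, start=1):
            if rel:
                total += 1.0 / i
                break
    return total / len(ranked_lists)

def ndcg(relevances, k=None):
    """Normalized DCG for one ranked list (linear gain variant)."""
    k = k or len(relevances)
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances[:k], start=1))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

# First relevant item at rank 2 in both queries -> MRR = 0.5
print(mrr([[0, 1, 0], [0, 1, 1]]))  # → 0.5
```

An MRR of 0.5 thus corresponds to the first relevant next-phrase appearing, on average, at rank 2.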
no code implementations • 20 Oct 2022 • Diddigi Raghu Ram Bharadwaj, Lakshya Kumar, Saif Jawaid, Sreekanth Vempati
User activities in a session can be classified into two groups: Known Intent and Unknown Intent.
no code implementations • 30 Jun 2022 • Lakshya Kumar, Sagnik Sarkar
Our experiments indicate that the RoBERTa model fine-tuned with an NDCG-based surrogate loss function (ApproxNDCG) achieves an NDCG improvement of 13.9% over other popular listwise loss functions such as ListNet and ListMLE.
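The core idea behind an ApproxNDCG-style surrogate is that a document's (non-differentiable) rank is approximated by a sigmoid over pairwise score differences, making NDCG usable directly as a training objective. A minimal sketch, with a temperature `alpha` chosen here purely for illustration:

```python
import math

def approx_ranks(scores, alpha=10.0):
    # approx_rank(i) = 1 + sum_{j != i} sigmoid(alpha * (s_j - s_i))
    ranks = []
    for i, si in enumerate(scores):
        r = 1.0
        for j, sj in enumerate(scores):
            if i != j:
                r += 1.0 / (1.0 + math.exp(-alpha * (sj - si)))
        ranks.append(r)
    return ranks

def approx_ndcg_loss(scores, relevances, alpha=10.0):
    """Negative smoothed NDCG: minimizing it pushes NDCG toward 1."""
    ranks = approx_ranks(scores, alpha)
    dcg = sum((2 ** rel - 1) / math.log2(1 + r) for rel, r in zip(relevances, ranks))
    ideal = sorted(relevances, reverse=True)
    idcg = sum((2 ** rel - 1) / math.log2(2 + i) for i, rel in enumerate(ideal))
    return -(dcg / idcg) if idcg > 0 else 0.0
```

Scores that order documents by relevance drive the loss toward -1, while a reversed ordering yields a higher (worse) loss, which is what makes the surrogate usable with gradient-based fine-tuning.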
no code implementations • 17 Jul 2021 • Lakshya Kumar, Sagnik Sarkar
For the product retrieval task, the RoBERTa model outperforms the other two models, with improvements of 164.7% in Precision@50 and 145.3% in Recall@50.
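For reference, Precision@k and Recall@k as reported above can be computed as follows; the retrieved/relevant IDs here are made up, not drawn from the paper's catalog.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top = retrieved[:k]
    return sum(1 for d in top if d in relevant) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant items found in the top-k retrieved."""
    top = retrieved[:k]
    return sum(1 for d in top if d in relevant) / len(relevant)

# Hypothetical example: 2 of the top-4 results are relevant
retrieved = ["a", "b", "c", "d"]
relevant = {"a", "c", "x"}
print(precision_at_k(retrieved, relevant, 4))  # → 0.5
```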
no code implementations • 6 Jul 2020 • Shreyas Mangalgi, Lakshya Kumar, Ravindra Babu Tallamraju
Once pre-trained, the RoBERTa model can be fine-tuned for various downstream supply-chain tasks such as pincode suggestion and geo-coding.
no code implementations • WS 2019 • Abhijeet Dubey, Lakshya Kumar, Arpan Somani, Aditya Joshi, Pushpak Bhattacharyya
Initially, to gain insight into the problem, we implement a rule-based classifier and a statistical machine learning (ML) classifier.
no code implementations • 6 Sep 2017 • Lakshya Kumar, Arpan Somani, Pushpak Bhattacharyya
We analyze the challenges of the problem and present rule-based, machine learning, and deep learning approaches to detecting sarcasm in the numerical portions of text.
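A toy rule in the spirit of the rule-based approach described above: flag sentences that pair a positive sentiment word with a number outside a plausible range for the quantity mentioned (e.g. praising a battery life of 2 hours). The lexicon and the plausibility ranges below are invented for illustration and are not the paper's rules.

```python
import re

# Hypothetical lexicon and plausibility ranges, for illustration only
POSITIVE = {"amazing", "great", "excellent", "love"}
PLAUSIBLE = {"hours": (8, 24)}  # e.g. praised battery life below 8h is suspect

def numerical_sarcasm_rule(sentence):
    """Return True if a positive word co-occurs with an implausible number."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    if not (words & POSITIVE):
        return False
    for num, unit in re.findall(r"(\d+(?:\.\d+)?)\s*(hours)", sentence.lower()):
        lo, hi = PLAUSIBLE[unit]
        if not (lo <= float(num) <= hi):
            return True  # positive word + implausible number -> possible sarcasm
    return False

print(numerical_sarcasm_rule("Amazing battery life of 2 hours"))  # → True
```

Such hand-written rules are brittle, which is precisely the motivation for the ML and deep learning approaches the paper also presents.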
no code implementations • EMNLP 2017 • Raksha Sharma, Arpan Somani, Lakshya Kumar, Pushpak Bhattacharyya
Identification of the intensity ordering among polar (positive or negative) words that share the same semantics can enable fine-grained sentiment analysis.