no code implementations • EMNLP 2020 • Dhanasekar Sundararaman, Shijing Si, Vivek Subramanian, Guoyin Wang, Devamanyu Hazarika, Lawrence Carin
We propose a new methodology to assign and learn embeddings for numbers.
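As an illustration only (the paper's actual embedding scheme may differ), one way to assign magnitude-aware embeddings to numbers is to map each value deterministically onto an angle, so that numerically close values receive vectors with high cosine similarity. The dimension, range bounds, and normalization below are all assumptions for the sketch:

```python
import numpy as np

def number_embedding(value: float, dim: int = 8,
                     lo: float = 0.0, hi: float = 1000.0) -> np.ndarray:
    """Illustrative deterministic number embedding: a number's normalized
    magnitude picks an angle, so nearby numbers get similar vectors.
    This is a sketch, not the paper's exact method."""
    t = (np.clip(value, lo, hi) - lo) / (hi - lo)  # normalize to [0, 1]
    theta = t * np.pi / 2                          # angle in [0, pi/2]
    idx = np.arange(dim)
    # Alternate cos/sin components; scaling makes the vector unit-norm,
    # so the dot product of two embeddings equals cos(theta1 - theta2).
    return np.where(idx % 2 == 0, np.cos(theta), np.sin(theta)) / np.sqrt(dim / 2)
```

With this construction, similarity decays smoothly with numeric distance: the embedding of 10 is closer to that of 11 than to that of 900.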
no code implementations • Findings (EMNLP) 2021 • Dhanasekar Sundararaman, Henry Tsai, Kuang-Huei Lee, Iulia Turc, Lawrence Carin
It has been shown that training multi-task models with auxiliary tasks can improve the target task quality through cross-task transfer.
no code implementations • 17 Oct 2022 • Dhanasekar Sundararaman, Nikhil Mehta, Lawrence Carin
The model is fine-tuned by introducing a new regularization loss that separates the embeddings of IND and OOD data, which leads to significant gains on the OOD prediction task during testing.
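A minimal sketch of such a separation regularizer, assuming a hinge-style penalty on the distance between the in-distribution (IND) and out-of-distribution (OOD) embedding centroids; the paper's actual loss may be formulated differently:

```python
import numpy as np

def ood_separation_loss(ind_emb: np.ndarray, ood_emb: np.ndarray,
                        margin: float = 1.0) -> float:
    """Illustrative regularizer: penalize IND and OOD embedding centroids
    for being closer than `margin` apart. Added to the task loss during
    fine-tuning, it pushes the two populations apart in embedding space."""
    ind_center = ind_emb.mean(axis=0)   # centroid of IND embeddings
    ood_center = ood_emb.mean(axis=0)   # centroid of OOD embeddings
    dist = np.linalg.norm(ind_center - ood_center)
    # Hinge: zero loss once the centroids are at least `margin` apart.
    return max(0.0, margin - dist)
```

The hinge form leaves well-separated batches unpenalized, so the regularizer only acts when IND and OOD embeddings overlap.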
no code implementations • 2 Aug 2022 • Dhanasekar Sundararaman, Vivek Subramanian
We show that pre-trained IR models perform poorly on zero-shot retrieval tasks when a large pre-trained BERT encoder is fully fine-tuned, and that lightweight fine-tuning with adapter networks improves zero-shot retrieval performance by almost 20% over the baseline.
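Lightweight fine-tuning with adapters can be sketched as follows. This is an illustrative bottleneck adapter in the style commonly inserted into transformer layers (down-projection, nonlinearity, up-projection, residual connection); the hidden and bottleneck sizes are assumptions, and the paper's exact adapter design may differ:

```python
import numpy as np

class Adapter:
    """Illustrative bottleneck adapter: only these two small matrices are
    trained, while the large pre-trained encoder stays frozen."""
    def __init__(self, hidden: int, bottleneck: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (hidden, bottleneck))
        # Zero-initialized up-projection: the adapter starts as identity,
        # so inserting it does not disturb the pre-trained model.
        self.up = np.zeros((bottleneck, hidden))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        h = np.maximum(x @ self.down, 0.0)  # down-project + ReLU
        return x + h @ self.up              # up-project + residual
```

Because only the adapter parameters are updated, the frozen encoder retains its pre-trained generalization, which is one common explanation for why adapters help in zero-shot settings.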
no code implementations • 7 May 2022 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin
Like other word tokens, numbers are essential components of the text from which natural language processing (NLP) models are built and deployed.
1 code implementation • 31 Dec 2021 • Vivek Subramanian, Dhanasekar Sundararaman
Neural machine translation (NMT) systems aim to map text from one language into another.
no code implementations • 10 Nov 2019 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, Lawrence Carin
Attention-based models have shown significant improvement over traditional algorithms in several NLP tasks.
1 code implementation • ACL 2019 • Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.
no code implementations • 19 Nov 2017 • Nabarun Pal, Priya Arora, Dhanasekar Sundararaman, Puneet Kohli, Sai Sumanth Palakurthy
Cars are being sold more than ever.