no code implementations • EMNLP 2020 • Dhanasekar Sundararaman, Shijing Si, Vivek Subramanian, Guoyin Wang, Devamanyu Hazarika, Lawrence Carin
We propose a new methodology to assign and learn embeddings for numbers.
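The abstract snippet does not describe the method itself, but the general idea of assigning embeddings to numbers from their values (rather than treating each number as an arbitrary vocabulary token) can be sketched as follows. This is a hypothetical illustration, not the paper's actual technique: the features and projection weights below are invented for the example.

```python
import math

DIM = 4  # toy embedding size

# Hypothetical projection weights; in a real model these would be learned.
W = [[0.1 * (i + j + 1) for j in range(3)] for i in range(DIM)]

def number_features(x):
    """Simple hand-crafted features of a numeric value (illustrative only)."""
    sign = 1.0 if x >= 0 else -1.0
    mag = math.log10(abs(x) + 1.0)   # compresses large magnitudes
    frac = abs(x) - int(abs(x))      # fractional part
    return [sign, mag, frac]

def embed_number(x):
    """Map a number to a DIM-dimensional vector via a linear projection."""
    f = number_features(x)
    return [sum(w * v for w, v in zip(row, f)) for row in W]
```

Unlike string-lookup embeddings, this construction places numerically close values (e.g. 10 and 11) closer together in embedding space than distant ones (e.g. 10 and 9999).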
no code implementations • Findings (EMNLP) 2021 • Dhanasekar Sundararaman, Henry Tsai, Kuang-Huei Lee, Iulia Turc, Lawrence Carin
It has been shown that training multi-task models with auxiliary tasks can improve the quality of the target task through cross-task transfer.
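The idea of combining a target-task loss with a down-weighted auxiliary loss on shared parameters can be sketched with a toy example. This is a generic illustration of auxiliary-task training, not the paper's model; the data and weighting coefficient are invented.

```python
# Toy setup: one shared parameter w, a main (target) task and a related
# auxiliary task, combined with a weighting coefficient on the auxiliary loss.

def grad_loss(w, x, y):
    """Gradient of squared error (w*x - y)^2 with respect to w."""
    return 2 * (w * x - y) * x

main_data = [(1.0, 2.0), (2.0, 4.0)]   # target task: y = 2x
aux_data = [(1.0, 2.2), (3.0, 6.1)]    # related auxiliary task (noisy)
aux_weight = 0.3                        # down-weight the auxiliary loss

w, lr = 0.0, 0.05
for _ in range(200):
    g = sum(grad_loss(w, x, y) for x, y in main_data)
    g += aux_weight * sum(grad_loss(w, x, y) for x, y in aux_data)
    w -= lr * g

# w converges near 2.0, nudged slightly by the auxiliary task.
```

The weighting coefficient controls how much the auxiliary gradients influence the shared parameters; setting it too high lets the auxiliary task pull the model away from the target task's optimum.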
no code implementations • 7 May 2022 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin
Numbers, like any other word tokens, are essential components of the text from which natural language processing (NLP) models are built and deployed.
1 code implementation • 31 Dec 2021 • Vivek Subramanian, Dhanasekar Sundararaman
Neural machine translation (NMT) systems aim to map text from one language into another.
no code implementations • 10 Nov 2019 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, Lawrence Carin
Attention-based models have shown significant improvement over traditional algorithms in several NLP tasks.
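The core operation behind attention-based models is scaled dot-product attention: scores between a query and a set of keys are normalized with a softmax and used to take a weighted sum of value vectors. A minimal single-query sketch (generic, not specific to any of these papers):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention([1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
# The output leans toward the first value, whose key matches the query.
```

Because the weights are a softmax over all positions, every value contributes to the output, with the best-matching keys contributing most.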
1 code implementation • ACL 2019 • Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.
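A common baseline for such generic sentence embeddings is to mean-pool the word vectors of a sentence into one fixed-size vector. This sketch illustrates that baseline only, not the method of the ACL 2019 paper; the tiny vocabulary is invented for the example.

```python
# Hypothetical 2-dimensional word vectors for illustration.
word_vectors = {
    "cats":  [0.9, 0.1],
    "dogs":  [0.8, 0.2],
    "stock": [0.1, 0.9],
}

def sentence_embedding(tokens):
    """Mean-pool the vectors of known tokens into one fixed-size vector."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return [0.0, 0.0]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

emb = sentence_embedding(["cats", "dogs"])  # approximately [0.85, 0.15]
```

Mean pooling ignores word order, which is exactly the kind of limitation that motivates learned sentence encoders.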
no code implementations • 19 Nov 2017 • Nabarun Pal, Priya Arora, Dhanasekar Sundararaman, Puneet Kohli, Sai Sumanth Palakurthy
Cars are being sold more than ever.