no code implementations • EMNLP 2020 • Dhanasekar Sundararaman, Shijing Si, Vivek Subramanian, Guoyin Wang, Devamanyu Hazarika, Lawrence Carin
We propose a new methodology to assign and learn embeddings for numbers.
no code implementations • 2 Aug 2022 • Dhanasekar Sundararaman, Vivek Subramanian
We show that pre-trained IR models perform poorly on zero-shot retrieval tasks when a large pre-trained BERT encoder is fully fine-tuned, and that lightweight fine-tuning with adapter networks improves zero-shot retrieval performance by almost 20% over the baseline.
no code implementations • 7 May 2022 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin
Like other word tokens, numbers are essential components of the text from which natural language processing (NLP) models are built and deployed.
1 code implementation • 31 Dec 2021 • Vivek Subramanian, Dhanasekar Sundararaman
Neural machine translation (NMT) systems aim to map text from one language into another.
no code implementations • NAACL 2021 • Vivek Subramanian, Matthew Engelhard, Sam Berchuck, Liqun Chen, Ricardo Henao, Lawrence Carin
In many natural language processing applications, identifying predictive text can be as important as the predictions themselves.
no code implementations • 23 Aug 2020 • Vivek Subramanian, Joshua Khani
Extracting stimulus features from neuronal ensembles is of great interest for the development of neuroprosthetics that project sensory information directly to the brain via electrical stimulation.
no code implementations • 10 Nov 2019 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, Lawrence Carin
Attention-based models have shown significant improvement over traditional algorithms in several NLP tasks.