no code implementations • EMNLP 2020 • Shen Wang, Xiaokai Wei, Cicero Nogueira dos Santos, Zhiguo Wang, Ramesh Nallapati, Andrew Arnold, Bing Xiang, Philip S. Yu
Existing knowledge graph embedding approaches concentrate on modeling relation patterns such as symmetry/asymmetry, inversion, and composition, but overlook the hierarchical nature of relations.
no code implementations • 24 Apr 2024 • Haifeng Qian, Sujan Kumar Gonugondla, Sungsoo Ha, Mingyue Shang, Sanjay Krishna Gouda, Ramesh Nallapati, Sudipta Sengupta, Xiaofei Ma, Anoop Deoras
Speculative decoding has emerged as a powerful method to improve latency and throughput in hosting large language models.
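As a rough illustration of the draft-and-verify idea behind speculative decoding (not this paper's specific algorithm; `draft_model` and `target_model` are hypothetical callables returning logits):

```python
import torch

def speculative_step(target_model, draft_model, prefix, k=4):
    """Propose k tokens with the cheap draft model, then verify them
    with a single forward pass of the large target model.
    (Assumes batch size 1 for simplicity.)"""
    draft = prefix.clone()
    for _ in range(k):  # autoregressive drafting: k cheap steps
        logits = draft_model(draft)[:, -1, :]
        next_tok = logits.argmax(dim=-1, keepdim=True)
        draft = torch.cat([draft, next_tok], dim=-1)

    # One target-model pass scores all drafted positions at once.
    target_logits = target_model(draft)[:, prefix.size(1) - 1 : -1, :]
    target_preds = target_logits.argmax(dim=-1)
    proposed = draft[:, prefix.size(1):]

    # Accept the longest drafted prefix the target agrees with
    # (greedy acceptance; the general method uses rejection sampling).
    match = (target_preds == proposed).long().cumprod(dim=-1)
    n_accept = int(match.sum().item())
    return torch.cat([prefix, proposed[:, :n_accept]], dim=-1)
```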
no code implementations • 13 Mar 2024 • Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang
This study introduces bifurcated attention, a method designed to enhance language model inference in shared-context batch decoding scenarios.
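A minimal single-head sketch of the shared-context idea: score the query against one shared prefix KV cache plus per-sample incremental caches, instead of replicating the prefix KV per sample. This illustrates the general structure, not the paper's exact formulation:

```python
import torch

def bifurcated_attention(q, k_shared, v_shared, k_local, v_local):
    """Incremental attention with the prompt KV cache stored once for
    the whole batch. q: (B, D); k_shared/v_shared: (S, D);
    k_local/v_local: (B, L, D)."""
    d = q.size(-1)
    # Scores against the shared prefix: one GEMM, no KV replication.
    s_shared = q @ k_shared.T / d**0.5                         # (B, S)
    # Scores against each sample's own generated tokens.
    s_local = torch.einsum("bd,bld->bl", q, k_local) / d**0.5  # (B, L)
    probs = torch.softmax(torch.cat([s_shared, s_local], dim=-1), dim=-1)
    p_shared = probs[:, :k_shared.size(0)]
    p_local = probs[:, k_shared.size(0):]
    return p_shared @ v_shared + torch.einsum("bl,bld->bd", p_local, v_local)
```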
no code implementations • 13 Mar 2024 • Ben Athiwaratkun, Shiqi Wang, Mingyue Shang, Yuchen Tian, Zijian Wang, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Rob Kwiatkowski, Ramesh Nallapati, Bing Xiang
Generative models, widely utilized in various applications, can often struggle with prompts corresponding to partial tokens.
no code implementations • 2 Feb 2024 • Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang
Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, i.e., code generation.
no code implementations • 5 Jul 2023 • Prateek Yadav, Qing Sun, Hantian Ding, Xiaopeng Li, Dejiao Zhang, Ming Tan, Xiaofei Ma, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Mohit Bansal, Bing Xiang
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance.
1 code implementation • 20 Dec 2022 • Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang
While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file (the in-file context) and ignore the rich semantics in other files of the same project (the cross-file context), a critical source of information that is especially useful in modern modular software development.
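To illustrate the general idea of surfacing cross-file context (the paper models it jointly; the `retrieve` ranking function here is a hypothetical stand-in):

```python
# Illustrative only: prepend retrieved cross-file snippets to the
# completion prompt so the LM can condition on project-level context.
def build_prompt(in_file_code: str, project_files: dict[str, str],
                 retrieve, max_snippets: int = 3) -> str:
    """project_files maps path -> source; `retrieve` ranks snippets
    relevant to the unfinished in-file code."""
    snippets = retrieve(in_file_code, project_files)[:max_snippets]
    context = "\n".join(
        f"# from {path}\n{code}" for path, code in snippets
    )
    return f"{context}\n\n{in_file_code}"
```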
2 code implementations • 20 Dec 2022 • Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang
Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation.
2 code implementations • 26 Oct 2022 • Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion; we discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.
1 code implementation • 3 Oct 2022 • Nihal Jain, Dejiao Zhang, Wasi Uddin Ahmad, Zijian Wang, Feng Nan, Xiaopeng Li, Ming Tan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Xiaofei Ma, Bing Xiang
Specifically, we attain a 44% relative improvement on Semantic Textual Similarity tasks and 34% on Code-to-Code Search tasks.
no code implementations • 28 Sep 2022 • Jun Wang, Patrick Ng, Alexander Hanbo Li, Jiarong Jiang, Zhiguo Wang, Ramesh Nallapati, Bing Xiang, Sudipta Sengupta
When synthesizing a SQL query, no explicit semantic information about the NLQ is available to the parser, which leads to poor generalization performance.
1 code implementation • Findings (NAACL) 2022 • Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer
In the same vein, we pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets.
2 code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth
Empirical analyses show that, despite the challenging nature of generative tasks, we achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.
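For intuition on footprint compression, here is plain symmetric int8 post-training quantization; the paper's 16.5x ratio comes from a more aggressive, distillation-assisted scheme, so this is only a baseline sketch:

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: ~4x smaller than fp32.
    (Higher ratios require lower bit-widths and careful training.)"""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(768, 768)
q, s = quantize_int8(w)
print((w - dequantize(q, s)).abs().max())  # worst-case rounding error
```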
1 code implementation • EMNLP 2021 • Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or siamese loss.
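A minimal sketch of the triplet objective the snippet refers to, applied to precomputed sentence embeddings (the margin value is illustrative):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet objective on sentence embeddings: pull the
    positive sentence closer than the negative one by `margin`."""
    d_pos = 1 - F.cosine_similarity(anchor, positive)
    d_neg = 1 - F.cosine_similarity(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.randn(32, 768) for _ in range(3))
print(triplet_loss(a, p, n))
```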
1 code implementation • ACL 2021 • Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, Bing Xiang
A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents.
no code implementations • 17 Apr 2021 • Arthur Bražinskas, Mengwen Liu, Ramesh Nallapati, Sujith Ravi, Markus Dreyer
This applies to scenarios such as a news publisher training a summarizer on dated news and summarizing incoming recent news.
no code implementations • EACL 2021 • Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang
Experiments show that: (1) our IR-based retrieval method collects high-quality candidates efficiently, thus enabling our method to adapt easily to large-scale KBs; (2) the BERT model improves accuracy across all three sub-tasks; and (3) benefiting from multi-task learning, the unified model obtains further improvements with only 1/3 of the original parameters.
2 code implementations • NAACL 2021 • Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang
Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space.
Ranked #1 on Short Text Clustering on AG News
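As a baseline illustration of "clustering by distance in the representation space", k-means over fixed embeddings; the paper additionally shapes that space with a contrastive objective while clustering, which this sketch omits:

```python
# Stand-in embeddings replace the paper's learned encoder outputs.
import numpy as np
from sklearn.cluster import KMeans

embeddings = np.random.randn(1000, 128)  # (num_texts, embedding_dim)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(embeddings)
print(labels[:20])  # semantic category assignments
```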
1 code implementation • EACL 2021 • Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang
A key challenge for abstractive summarization is ensuring factual consistency of the generated summary with respect to the original document.
1 code implementation • ACL 2021 • Yifan Gao, Henghui Zhu, Patrick Ng, Cicero Nogueira dos Santos, Zhiguo Wang, Feng Nan, Dejiao Zhang, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
When multiple plausible answers are found, the system should rewrite the question for each answer to resolve the ambiguity.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dejiao Zhang, Ramesh Nallapati, Henghui Zhu, Feng Nan, Cicero Nogueira dos Santos, Kathleen McKeown, Bing Xiang
Unsupervised domain adaptation addresses the problem of leveraging labeled data in a source domain to learn a well-performing model in a target domain where labels are unavailable.
1 code implementation • EMNLP 2020 • Siamak Shakeri, Cicero Nogueira dos Santos, Henry Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions.
no code implementations • EMNLP 2020 • Cicero Nogueira dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
Generative models for Information Retrieval, where ranking of documents is viewed as the task of generating a query from a document's language model, were very successful in various IR tasks in the past.
1 code implementation • 22 Sep 2020 • Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
no code implementations • 17 Jul 2020 • Parminder Bhatia, Lan Liu, Kristjan Arumae, Nima Pourdamghani, Suyog Deshpande, Ben Snively, Mona Mona, Colby Wise, George Price, Shyam Ramaswamy, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, Bing Xiang, Taha Kass-Hout
Coronavirus disease (COVID-19) has been declared a pandemic by the WHO, with thousands of cases reported each day.
1 code implementation • ACL 2020 • Alexander R. Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
Training a QA model on this data gives a relative F1 improvement of about 14% over a previous unsupervised model on the SQuAD dataset, and 20% when the answer is a named entity, achieving state-of-the-art performance on SQuAD for unsupervised QA.
1 code implementation • 25 Nov 2019 • Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds.
no code implementations • WS 2019 • Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
The performance of deep neural models can deteriorate substantially when there is a domain shift between training and test data.
no code implementations • 17 Oct 2019 • Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang
We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks.
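Extracting layer-wise activations of this kind is straightforward with the `transformers` library (shown here with `bert-base-uncased` and mean pooling as one of several pooling choices):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tok(["a sample sentence"], return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states: tuple of (embedding layer + 12 layers), each (B, T, H).
# Mean-pool each layer to get one candidate sentence vector per layer.
layer_vectors = [h.mean(dim=1) for h in out.hidden_states]
print(len(layer_vectors), layer_vectors[0].shape)  # 13, (1, 768)
```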
no code implementations • WS 2019 • Shobhit Jain, Sravan Babu Bodapati, Ramesh Nallapati, Anima Anandkumar
Distributed word embeddings have yielded state-of-the-art performance in many NLP tasks, mainly due to their success in capturing useful semantic information.
no code implementations • IJCNLP 2019 • Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
To tackle this issue, we propose a multi-passage BERT model to globally normalize answer scores across all passages of the same question; this change enables our QA model to find better answers by utilizing more passages.
Ranked #3 on Open-Domain Question Answering on SearchQA
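The normalization change can be illustrated in a few lines: a softmax per passage yields scores that are not comparable across passages, while one softmax over all candidate spans of a question makes them directly comparable:

```python
import torch

# Toy answer-span scores for three passages of the same question.
scores = [torch.randn(5), torch.randn(8), torch.randn(3)]

per_passage = [torch.softmax(s, dim=-1) for s in scores]  # not comparable
global_probs = torch.softmax(torch.cat(scores), dim=-1)   # one distribution
best = int(torch.cat(scores).argmax())  # best span across every passage
```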
1 code implementation • ACL 2019 • Feng Nan, Ran Ding, Ramesh Nallapati, Bing Xiang
To measure the diversity of the produced topics, we propose a simple topic uniqueness metric.
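The snippet does not reproduce the paper's exact definition, so the following is one plausible formulation of a topic-uniqueness metric consistent with the description: each of a topic's top words is penalized by the number of topics that share it:

```python
from collections import Counter

def topic_uniqueness(topics, top_n=10):
    """Average, over topics, of the mean inverse occurrence count of the
    topic's top-n words across all topics. 1.0 means no sharing at all.
    (Assumed formulation, for illustration.)"""
    tops = [t[:top_n] for t in topics]
    counts = Counter(w for t in tops for w in t)
    per_topic = [sum(1.0 / counts[w] for w in t) / top_n for t in tops]
    return sum(per_topic) / len(per_topic)

print(topic_uniqueness([["cat", "dog"], ["cat", "fish"]], top_n=2))  # 0.75
```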
no code implementations • ICLR Workshop LLD 2019 • Peng Xu, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
In this paper, we propose a \textit{weak supervision} framework for neural ranking tasks based on the data programming paradigm \citep{Ratner2016}, which enables us to leverage multiple weak supervision signals from different sources.
no code implementations • CVPR 2019 • Pramuditha Perera, Ramesh Nallapati, Bing Xiang
The key contribution of our work is our proposal to explicitly constrain the latent space to exclusively represent the given class.
Ranked #6 on Anomaly Detection on Hyper-Kvasir Dataset
no code implementations • ICLR Workshop LLD 2019 • Ian Gemp, Ramesh Nallapati, Ran Ding, Feng Nan, Bing Xiang
We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.
2 code implementations • EMNLP 2018 • Ran Ding, Ramesh Nallapati, Bing Xiang
Topic models are evaluated based on their ability to describe documents well (i.e., low perplexity) and to produce topics that carry coherent semantic meaning.
no code implementations • 1 Aug 2017 • Ramesh Nallapati, Igor Melnyk, Abhishek Kumar, Bo-Wen Zhou
We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence.
7 code implementations • 14 Nov 2016 • Ramesh Nallapati, Feifei Zhai, Bo-Wen Zhou
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art.
Ranked #8 on Text Summarization on CNN / Daily Mail (Anonymized)
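A simplified sketch of a SummaRuNNer-style extractive scorer: a bidirectional GRU reads sentence embeddings in document order and a logistic layer decides, per sentence, whether it belongs in the summary (the full model adds content, salience, and novelty terms):

```python
import torch
import torch.nn as nn

class ExtractiveScorer(nn.Module):
    """Per-sentence keep/skip probabilities for extractive summarization."""
    def __init__(self, sent_dim=128, hidden=100):
        super().__init__()
        self.rnn = nn.GRU(sent_dim, hidden, bidirectional=True,
                          batch_first=True)
        self.classify = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):            # (B, num_sents, sent_dim)
        h, _ = self.rnn(sent_embs)           # (B, num_sents, 2*hidden)
        return torch.sigmoid(self.classify(h)).squeeze(-1)

probs = ExtractiveScorer()(torch.randn(2, 12, 128))  # (2, 12) keep-probs
```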
no code implementations • 14 Nov 2016 • Ramesh Nallapati, Bo-Wen Zhou, Mingbo Ma
The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary.
no code implementations • ACL 2016 • Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bo-Wen Zhou, Yoshua Bengio
At each time-step, the decision of which softmax layer to use is made adaptively by an MLP conditioned on the context. We motivate our work with psychological evidence that humans naturally tend to point toward objects in the context or the environment when the name of an object is not known. Using our proposed model, we observe improvements on two tasks: neural machine translation on the Europarl English-to-French parallel corpora and text summarization on the Gigaword dataset.
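A sketch of the pointer-softmax mixture described above: the switch probability produced by the MLP gates between the shortlist vocabulary distribution and the pointer (attention) distribution over source positions:

```python
import torch

def pointer_softmax(vocab_logits, attn_weights, switch_prob):
    """Combined next-token distribution over vocabulary + source positions.
    vocab_logits: (B, V); attn_weights: (B, T), rows sum to 1;
    switch_prob: (B, 1), the MLP's P(generate from shortlist)."""
    vocab_probs = torch.softmax(vocab_logits, dim=-1)
    # First V entries: generate from vocab; last T: point at a source word.
    return torch.cat([switch_prob * vocab_probs,
                      (1 - switch_prob) * attn_weights], dim=-1)
```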
4 code implementations • CONLL 2016 • Ramesh Nallapati, Bo-Wen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, Bing Xiang
In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora.
Ranked #10 on Text Summarization on DUC 2004 Task 1