2 code implementations • NAACL 2021 • Dejiao Zhang, Feng Nan, Xiaokai Wei, Shangwen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang
Unsupervised clustering aims at discovering the semantic categories of data according to distances measured in the representation space.
Ranked #1 on Short Text Clustering on AG News
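The entry above describes clustering driven by distances in a learned representation space. As a minimal sketch of that idea, here is plain k-means with Euclidean distance over embedding vectors; this is an illustrative baseline, not the paper's specific method:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid by
    Euclidean distance, then recompute centroids. Illustrative only."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # pairwise distances between all points and all centroids
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated groups of toy "embeddings"
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels = kmeans(X, k=2)
print(labels)  # the first two points share one cluster id, the last two the other
```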
1 code implementation • EACL 2021 • Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang
A key challenge for abstractive summarization is ensuring factual consistency of the generated summary with respect to the original document.
1 code implementation • ACL 2021 • Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, Bing Xiang
A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents.
1 code implementation • ACL 2019 • Feng Nan, Ran Ding, Ramesh Nallapati, Bing Xiang
To measure the diversity of the produced topics, we propose a simple topic uniqueness metric.
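One simple way to instantiate such a metric is the fraction of top words that belong to exactly one topic; this is an illustrative formulation, and the paper's exact definition may differ:

```python
from collections import Counter

def topic_uniqueness(topics):
    """Fraction of top words that appear in exactly one topic.

    `topics` is a list of top-word lists, one per topic. Illustrative
    formulation, not necessarily the paper's exact metric.
    """
    counts = Counter(w for topic in topics for w in topic)
    total = sum(len(topic) for topic in topics)
    unique = sum(1 for topic in topics for w in topic if counts[w] == 1)
    return unique / total

topics = [["game", "team", "season"], ["election", "vote", "team"]]
# "team" appears in two topics; the other four top words are unique,
# so uniqueness is 4/6.
print(topic_uniqueness(topics))
```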
1 code implementation • ACL 2021 • Yifan Gao, Henghui Zhu, Patrick Ng, Cicero Nogueira dos Santos, Zhiguo Wang, Feng Nan, Dejiao Zhang, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
When multiple plausible answers are found, the system should rewrite the question for each answer to resolve the ambiguity.
1 code implementation • 25 Jan 2023 • Kung-Hsiang Huang, Siffi Singh, Xiaofei Ma, Wei Xiao, Feng Nan, Nicholas Dingwall, William Yang Wang, Kathleen McKeown
Missing information is a common issue in dialogue summarization, where some information in the reference summaries is not covered by the generated summaries.
1 code implementation • 25 Nov 2019 • Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds.
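The parent-identification task above can be made concrete with a naive word-overlap baseline: link each utterance to the earlier utterance it shares the most vocabulary with. This toy heuristic stands in for the paper's learned model:

```python
def find_parents(utterances):
    """Naive baseline: link each utterance to the earlier utterance with
    the highest Jaccard word overlap; the opener has no parent (None).
    A toy stand-in for a learned conversation-structure model."""
    parents = [None]
    token_sets = [set(u.lower().split()) for u in utterances]
    for i in range(1, len(utterances)):
        def jaccard(j):
            a, b = token_sets[i], token_sets[j]
            return len(a & b) / len(a | b) if a | b else 0.0
        parents.append(max(range(i), key=jaccard))
    return parents

convo = [
    "does anyone know how to reset the router",
    "what brand is the router",
    "i reset it but nothing happened",
    "the router brand is netgear",
]
print(find_parents(convo))  # [None, 0, 0, 1]
```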
1 code implementation • EMNLP 2020 • Siamak Shakeri, Cicero Nogueira dos Santos, Henry Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions.
no code implementations • 31 May 2017 • Henghui Zhu, Feng Nan, Ioannis Paschalidis, Venkatesh Saligrama
Deep neural network (DNN) based approaches hold significant potential for reinforcement learning (RL) and have already shown remarkable gains over state-of-the-art methods in a number of applications.
no code implementations • NeurIPS 2017 • Feng Nan, Venkatesh Saligrama
Our novel bottom-up method first trains a high-accuracy complex model.
no code implementations • 10 May 2017 • Feng Nan, Venkatesh Saligrama
We point out an issue with Theorem 5 appearing in "Group-based active query selection for rapid diagnosis in time-critical situations".
no code implementations • 25 Apr 2017 • Feng Nan, Venkatesh Saligrama
Our objective is to minimize overall average cost without sacrificing accuracy.
no code implementations • NeurIPS 2016 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We propose to prune a random forest (RF) for resource-constrained prediction.
no code implementations • 5 Jan 2016 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We propose a novel 0-1 integer program formulation for ensemble pruning.
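A 0-1 integer program for ensemble pruning chooses a binary selection vector over ensemble members. The sketch below brute-forces that search with a hypothetical error-plus-cost objective, standing in for a real integer-program solver; the objective is illustrative, not the paper's exact formulation:

```python
from itertools import product

def prune_ensemble(preds, y, costs, lam=0.1):
    """Search over 0-1 selection vectors z: keep the subset of ensemble
    members minimizing majority-vote error + lam * total cost.
    Brute force stands in for an integer-program solver here; the
    objective is a hypothetical example."""
    n = len(preds)
    best_z, best_obj = None, float("inf")
    for z in product([0, 1], repeat=n):
        if sum(z) == 0:
            continue  # must keep at least one member
        errors = 0
        for i in range(len(y)):
            votes = sum(preds[m][i] for m in range(n) if z[m])
            majority = 1 if 2 * votes >= sum(z) else 0
            errors += int(majority != y[i])
        obj = errors / len(y) + lam * sum(c for c, zi in zip(costs, z) if zi)
        if obj < best_obj:
            best_z, best_obj = z, obj
    return best_z, best_obj

# member 0 predicts perfectly, so pruning keeps it alone
preds = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0]]
y = [1, 1, 0, 0]
best_z, best_obj = prune_ensemble(preds, y, costs=[1.0, 1.0, 1.0])
print(best_z, best_obj)  # (1, 0, 0) 0.1
```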
no code implementations • 20 Feb 2015 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We seek decision rules for prediction-time cost reduction, where complete data is available for training, but during prediction-time, each feature can only be acquired for an additional cost.
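The prediction-time setting above can be sketched as a staged classifier: features are acquired in cost order, and prediction stops as soon as an intermediate classifier is confident. The stage classifiers, costs, and threshold below are all hypothetical, not the paper's decision rules:

```python
def predict_with_budget(x, stages, threshold=0.9):
    """Toy prediction-time cost reduction: acquire feature groups stage by
    stage, each at a cost; stop once some stage's classifier is confident
    enough. Returns (label, total acquisition cost). Illustrative only."""
    spent, acquired = 0.0, {}
    for cost, feats, clf in stages:
        spent += cost
        acquired.update({f: x[f] for f in feats})
        label, confidence = clf(acquired)
        if confidence >= threshold:
            break  # confident enough; skip the remaining (costly) features
    return label, spent

def cheap_clf(feats):
    # hypothetical: confident only when the cheap feature is extreme
    v = feats["f0"]
    return (1 if v > 0 else 0), min(abs(v), 1.0)

def full_clf(feats):
    # hypothetical: always confident once both features are available
    s = feats["f0"] + feats["f1"]
    return (1 if s > 0 else 0), 1.0

stages = [(1.0, ["f0"], cheap_clf), (5.0, ["f1"], full_clf)]
print(predict_with_budget({"f0": 2.0, "f1": -0.5}, stages))  # (1, 1.0): cheap stage suffices
print(predict_with_budget({"f0": 0.1, "f1": 0.8}, stages))   # (1, 6.0): both stages needed
```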
no code implementations • 12 Jan 2015 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We develop a broad class of admissible impurity functions that admit monomials, classes of polynomials, and hinge-loss functions, allowing for flexible impurity design with provably optimal approximation bounds.
no code implementations • 27 Apr 2021 • Han-Chin Shing, Chaitanya Shivade, Nima Pourdamghani, Feng Nan, Philip Resnik, Douglas Oard, Parminder Bhatia
The records of a clinical encounter can be extensive and complex, thus placing a premium on tools that can extract and summarize relevant information.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dejiao Zhang, Ramesh Nallapati, Henghui Zhu, Feng Nan, Cicero Nogueira dos Santos, Kathleen McKeown, Bing Xiang
Unsupervised domain adaptation addresses the problem of leveraging labeled data in a source domain to learn a well-performing model in a target domain where labels are unavailable.
no code implementations • 5 Aug 2021 • Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, Sujith Ravi
Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents.
no code implementations • ICLR Workshop LLD 2019 • Ian Gemp, Ramesh Nallapati, Ran Ding, Feng Nan, Bing Xiang
We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.
no code implementations • 3 Oct 2022 • Nihal Jain, Dejiao Zhang, Wasi Uddin Ahmad, Zijian Wang, Feng Nan, Xiaopeng Li, Ming Tan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Xiaofei Ma, Bing Xiang
Specifically, we attain a 44% relative improvement on Semantic Textual Similarity tasks and 34% on Code-to-Code Search tasks.
no code implementations • COLING 2022 • Fei-Tzin Lee, Miguel Ballesteros, Feng Nan, Kathleen McKeown
Large pretrained language models offer powerful generation capabilities, but cannot be reliably controlled at a sub-sentential level.