no code implementations • NAACL (TeachingNLP) 2021 • Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu
We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.
1 code implementation • 14 Dec 2021 • Jiacheng Xu, Siddhartha Reddy Jonnalagadda, Greg Durrett
Conditional neural text generation models generate high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options.
1 code implementation • ACL 2022 • Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett
Generic summaries try to cover an entire document, while query-based summaries aim to answer document-specific questions.
no code implementations • Findings (ACL) 2022 • Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, Greg Durrett
Across different datasets (CNN/DM, XSum, MediaSum) and summary properties, such as abstractiveness and hallucination, we study what the model learns at different stages of its fine-tuning process.
1 code implementation • Findings (ACL) 2021 • Aditya Gupta, Jiacheng Xu, Shyam Upadhyay, Diyi Yang, Manaal Faruqui
Disfluency is an under-studied topic in NLP, even though it is ubiquitous in human conversation.
1 code implementation • ACL 2021 • Jiacheng Xu, Greg Durrett
Despite the prominence of neural abstractive summarization models, we know little about how they actually form summaries and how to understand where their decisions come from.
1 code implementation • EMNLP 2020 • Shrey Desai, Jiacheng Xu, Greg Durrett
Compressive summarization systems typically rely on a crafted set of syntactic rules to determine what spans of possible summary sentences can be deleted, then learn a model of what to actually delete by optimizing for content selection (ROUGE).
1 code implementation • EMNLP 2020 • Jiacheng Xu, Shrey Desai, Greg Durrett
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior.
1 code implementation • ACL 2020 • Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu
Recently, BERT has been adopted for document encoding in state-of-the-art text summarization models.
1 code implementation • IJCNLP 2019 • Jiacheng Xu, Greg Durrett
In this work, we present a neural model for single-document summarization based on joint extraction and syntactic compression.
1 code implementation • EMNLP 2018 • Jiacheng Xu, Greg Durrett
A hallmark of variational autoencoders (VAEs) for text processing is their combination of powerful encoder-decoder models, such as LSTMs, with simple latent distributions, typically multivariate Gaussians.
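The combination described above can be illustrated by the standard Gaussian reparameterization trick at the heart of such VAEs. This is a minimal NumPy sketch, not the paper's implementation: the LSTM encoder and decoder are elided, and `mu` and `log_var` stand in for hypothetical encoder outputs.

```python
import numpy as np

# Minimal sketch of the Gaussian latent step in a text VAE:
# sample z = mu + sigma * eps with eps ~ N(0, I), so gradients can
# flow through mu and sigma (the reparameterization trick).
# mu and log_var are hypothetical encoder outputs; real systems
# would produce them with an LSTM encoder.

def reparameterize(mu, log_var, rng):
    """Draw a latent sample z ~ N(mu, diag(exp(log_var)))."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
# A batch of 4 "sentences" with a 16-dimensional latent space.
mu = rng.standard_normal((4, 16))
log_var = 0.1 * rng.standard_normal((4, 16))

z = reparameterize(mu, log_var, rng)     # shape (4, 16)
kl = kl_to_standard_normal(mu, log_var)  # shape (4,), always >= 0
```

The KL term regularizes the posterior toward the simple prior N(0, I); with powerful LSTM decoders this term often collapses to zero, which is a central difficulty these models must address.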
no code implementations • 25 Feb 2018 • Jinyue Su, Jiacheng Xu, Xipeng Qiu, Xuanjing Huang
Generating plausible and fluent sentences with desired properties has long been a challenge.
no code implementations • 24 Aug 2017 • Jiacheng Xu
Specifically, we propose a novel approach to a family-based shopping recommendation system.
no code implementations • 26 Nov 2016 • Jiacheng Xu, Kan Chen, Xipeng Qiu, Xuanjing Huang
In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities.
no code implementations • EMNLP 2016 • Jiacheng Xu, Danlu Chen, Xipeng Qiu, Xuanjing Huang
Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering.