1 code implementation • Findings (EMNLP) 2021 • Anup Anand Deshmukh, Qianqiu Zhang, Ming Li, Jimmy Lin, Lili Mou
In this paper, we address unsupervised chunking as a new task of syntactic structure induction, which is helpful for understanding the linguistic structures of human languages as well as processing low-resource languages.
no code implementations • EMNLP 2020 • Yixing Luan, Bradley Hauer, Lili Mou, Grzegorz Kondrak
It has been conjectured that multilingual information can help monolingual word sense disambiguation (WSD).
1 code implementation • 28 Oct 2024 • Yongchang Hao, Yanshuai Cao, Lili Mou
However, the model sizes are constrained by the available on-device memory during training and inference.
no code implementations • 19 Sep 2024 • Dongheng Li, Yongchang Hao, Lili Mou
Large language models have become increasingly popular and demonstrated remarkable performance in various natural language processing (NLP) tasks.
1 code implementation • 10 Jun 2024 • Yutong Han, Yan Yuan, Lili Mou
Radiology report analysis provides valuable information that can aid with public health initiatives, and has been attracting increasing attention from the research community.
1 code implementation • 29 Feb 2024 • Behzad Shayegh, Yuqiao Wen, Lili Mou
We address unsupervised discontinuous constituency parsing, where we observe a high variance in the performance of the only previous model in the literature.
no code implementations • 29 Feb 2024 • Yuqiao Wen, Behzad Shayegh, Chenyang Huang, Yanshuai Cao, Lili Mou
The ability of zero-shot translation emerges when we train a multilingual model on certain translation directions; the model can then directly translate in unseen directions.
2 code implementations • 5 Feb 2024 • Yongchang Hao, Yanshuai Cao, Lili Mou
Despite large neural networks demonstrating remarkable abilities to complete different tasks, they require excessive memory to store the optimization states for training.
no code implementations • 5 Feb 2024 • Yongchang Hao, Yanshuai Cao, Lili Mou
The major reason is the quadratic memory and cubic time complexity of computing the matrix inverse.
1 code implementation • 3 Oct 2023 • Behzad Shayegh, Yanshuai Cao, Xiaodan Zhu, Jackie C. K. Cheung, Lili Mou
We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data.
1 code implementation • 2 Oct 2023 • Zijun Wu, Yongkang Wu, Lili Mou
Prompt tuning in natural language processing (NLP) has become an increasingly popular method for adapting large language models to specific tasks.
no code implementations • 19 Sep 2023 • Xianggen Liu, Zhengdong Lu, Lili Mou
Deep learning has substantially improved the performance of various natural language processing (NLP) tasks.
no code implementations • 18 Sep 2023 • Lili Mou
With the advances of deep learning techniques, text generation is attracting increasing interest in the artificial intelligence (AI) community, owing to its wide applications and its role as an essential component of AI.
1 code implementation • 10 Sep 2023 • Zijun Wu, Anup Anand Deshmukh, Yongkang Wu, Jimmy Lin, Lili Mou
Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks.
1 code implementation • 27 Jul 2023 • Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou
Experiments across four datasets show that our methods outperform existing KD approaches, and that our symmetric distilling losses can better force the student to learn from the teacher distribution.
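As one concrete example of a symmetric distilling loss (an illustration of symmetry, not necessarily the exact objective used in the paper), the Jensen–Shannon divergence treats the teacher distribution P and the student distribution Q interchangeably:

    \mathrm{JS}(P \,\|\, Q) = \tfrac{1}{2}\,\mathrm{KL}(P \,\|\, M) + \tfrac{1}{2}\,\mathrm{KL}(Q \,\|\, M), \qquad M = \tfrac{1}{2}(P + Q)

Swapping P and Q leaves this loss unchanged, so neither distribution is privileged the way the teacher is in the standard asymmetric KL-based distillation objective.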
1 code implementation • 27 Jan 2023 • Guoqing Luo, Yu Tong Han, Lili Mou, Mauajama Firdaus
In this paper, we present a prompt-based editing approach for text style transfer.
1 code implementation • 17 Oct 2022 • Yongchang Hao, Yuxin Liu, Lili Mou
We additionally propose a simple modification to stabilize the RL training on non-parallel datasets with our induced reward function.
2 code implementations • 29 Sep 2022 • Yuqiao Wen, Yongchang Hao, Yanshuai Cao, Lili Mou
Open-domain dialogue systems aim to interact with humans through natural language texts in an open-ended fashion.
1 code implementation • 10 Aug 2022 • Lucas N. Ferreira, Lili Mou, Jim Whitehead, Levi H. S. Lelis
We use Monte Carlo Tree Search as a decoding mechanism to steer the probability distribution learned by a language model towards a given emotion.
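A minimal sketch of the idea using plain Monte Carlo rollouts (a simplification of full MCTS, which additionally maintains a search tree with selection and backpropagation; the toy language model and emotion scorer below are hypothetical stand-ins, not the paper's trained components): at each decoding step, every candidate token is valued by sampling continuations and scoring them with the emotion model.

    import random

    VOCAB = ["happy", "sad", "sun", "rain", "day", "."]

    def lm_next(prefix):
        # Toy stand-in for a language model: uniform over the vocabulary.
        return {w: 1.0 / len(VOCAB) for w in VOCAB}

    def emotion_score(tokens):
        # Toy stand-in for an emotion model: reward "joyful" words.
        return sum(t in ("happy", "sun") for t in tokens) / max(len(tokens), 1)

    def rollout(tokens, horizon=5):
        # Sample a continuation from the language model, then score it.
        tokens = list(tokens)
        for _ in range(horizon):
            dist = lm_next(tokens)
            tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return emotion_score(tokens)

    def decode(prefix, length=6, n_rollouts=20):
        tokens = list(prefix)
        for _ in range(length):
            # Monte Carlo value estimate for each candidate next token.
            values = {w: sum(rollout(tokens + [w]) for _ in range(n_rollouts)) / n_rollouts
                      for w in lm_next(tokens)}
            tokens.append(max(values, key=values.get))
        return tokens

    print(decode(["a"]))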
1 code implementation • ACL 2022 • Puyuan Liu, Chenyang Huang, Lili Mou
Text summarization aims to generate a short summary for an input text.
1 code implementation • 28 May 2022 • Puyuan Liu, Xiang Zhang, Lili Mou
Sentence summarization aims at compressing a long sentence into a short one that keeps the main gist, and has extensive real-world applications such as headline generation.
1 code implementation • NAACL 2022 • Wang Xu, Kehai Chen, Lili Mou, Tiejun Zhao
Document-level relation extraction (DocRE) aims to determine the relation between two entities from a document of multiple sentences.
Ranked #5 on Dialog Relation Extraction on DialogRE (F1c (v1) metric)
1 code implementation • 9 Feb 2022 • Siguang Huang, Yunli Wang, Lili Mou, Huayue Zhang, Han Zhu, Chuan Yu, Bo Zheng
In previous work, researchers have developed several calibration methods to post-process the outputs of a predictor to obtain calibrated values, such as binning and scaling methods.
1 code implementation • LREC 2022 • Yuqiao Wen, Guoqing Luo, Lili Mou
Open-domain dialogue systems aim to converse with humans through text, and dialogue research has heavily relied on benchmark datasets.
1 code implementation • 6 Dec 2021 • Shailza Jolly, Zi Xuan Zhang, Andreas Dengel, Lili Mou
To this end, we propose a search-and-learning approach that leverages pretrained language models but inserts the missing slots to improve the semantic coverage.
2 code implementations • 14 Oct 2021 • Chenyang Huang, Hao Zhou, Osmar R. Zaïane, Lili Mou, Lei Li
How do we perform efficient inference while retaining high translation quality?
no code implementations • 1 Oct 2021 • Xianggen Liu, Pengyong Li, Fandong Meng, Hao Zhou, Huasong Zhong, Jie Zhou, Lili Mou, Sen Song
The key idea is to integrate powerful neural networks into metaheuristics (e.g., simulated annealing, SA) to restrict the search space in discrete optimization.
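For reference, the acceptance rule at the heart of simulated annealing, in a minimal sketch (the scoring function and edit proposal here are toy stand-ins, not the paper's neural components):

    import math, random

    def simulated_annealing(x0, score, propose, t0=1.0, alpha=0.95, steps=200):
        # Classic SA: always accept improvements; accept a worse candidate
        # with probability exp(delta / T), where T decays geometrically.
        x, t = x0, t0
        for _ in range(steps):
            cand = propose(x)
            delta = score(cand) - score(x)
            if delta >= 0 or random.random() < math.exp(delta / t):
                x = cand
            t *= alpha
        return x

    # Toy stand-ins for the neural scorer and editor: prefer strings of length 10.
    score = lambda s: -abs(len(s) - 10)
    propose = lambda s: s + "a" if random.random() < 0.5 else (s[:-1] or "a")
    print(simulated_annealing("hello", score, propose))

A learned scorer and proposal model restrict which candidates are ever generated, which is how neural networks shrink the search space in this framework.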
no code implementations • 22 Sep 2021 • Chengzhang Dong, Chenyang Huang, Osmar Zaïane, Lili Mou
Explicitly modeling emotions in dialogue generation has important applications, such as building empathetic personal companions.
1 code implementation • 18 Sep 2021 • Zijun Wu, Zi Xuan Zhang, Atharva Naik, Zhijian Mei, Mauajama Firdaus, Lili Mou
In this work, we address the explainability of NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach.
no code implementations • RANLP 2021 • Bradley Hauer, Grzegorz Kondrak, Yixing Luan, Arnob Mallik, Lili Mou
Our two unsupervised methods refine sense annotations produced by a knowledge-based WSD system via lexical translations in a parallel corpus.
no code implementations • ACL (spnlp) 2021 • Chenyang Huang, Wei Yang, Yanshuai Cao, Osmar Zaïane, Lili Mou
In this paper, we propose a globally normalized model for context-free grammar (CFG)-based semantic parsing.
no code implementations • NAACL 2021 • Chenyang Huang, Amine Trabelsi, Xuebin Qin, Nawshad Farruque, Lili Mou, Osmar Zaïane
Multi-label emotion classification is an important task in NLP and is essential to many applications.
1 code implementation • 23 Feb 2021 • Zeyu Sun, Wenjie Zhang, Lili Mou, Qihao Zhu, Yingfei Xiong, Lu Zhang
Existing graph neural networks (GNNs) largely rely on node embeddings, which represent a node as a vector by its identity, type, or content.
1 code implementation • COLING 2020 • Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, WenHan Chao
Conventional approaches for formality style transfer borrow models from neural machine translation, which typically requires massive parallel data for training.
no code implementations • NeurIPS 2020 • Jingjing Li, Zichao Li, Lili Mou, Xin Jiang, Michael R. Lyu, Irwin King
In this work, we present TGLS, a novel framework for unsupervised Text Generation by Learning from Search.
no code implementations • ACL 2020 • Lili Mou, Olga Vechtomova
We start from the definition of style and different settings of stylized text generation, illustrated with various applications.
1 code implementation • ACL 2020 • Dhruv Kumar, Lili Mou, Lukasz Golab, Olga Vechtomova
We present a novel iterative, edit-based approach to unsupervised sentence simplification.
Ranked #5 on Text Simplification on Newsela
2 code implementations • ACL 2020 • Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert
Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Pourya Vakilipourtakalou, Lili Mou
Recurrent neural networks (RNNs) are non-linear dynamical systems.
2 code implementations • 22 Nov 2019 • Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, Lu Zhang
TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%).
1 code implementation • COLING 2020 • Kashif Khan, Gaurav Sahu, Vikash Balasubramanian, Lili Mou, Olga Vechtomova
Generating relevant responses in a dialog is challenging, and requires not only proper modeling of context in the conversation but also being able to generate fluent sentences during inference.
no code implementations • 10 Nov 2019 • Amirpasha Ghabussi, Lili Mou, Olga Vechtomova
Moreover, we can train our model on relatively small datasets and learn the latent representation of a specified class by adding external data with other styles/classes to our dataset.
no code implementations • IJCNLP 2019 • Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, WenHan Chao
Formality text style transfer plays an important role in various NLP applications, such as non-native speaker assistants and child education.
no code implementations • ACL 2020 • Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, Sen Song
Unsupervised paraphrase generation is a promising and important research topic in natural language processing.
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
1 code implementation • ACL 2019 • Bowen Li, Lili Mou, Frank Keller
In our work, we propose an imitation learning approach to unsupervised parsing, where we transfer the syntactic knowledge induced by the PRPN to a Tree-LSTM model with discrete parsing actions.
4 code implementations • 28 Mar 2019 • Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin
In the natural language processing literature, neural networks are becoming increasingly deep and complex.
Ranked #60 on Sentiment Analysis on SST-2 Binary classification
1 code implementation • 14 Nov 2018 • Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei Li
In real-world applications of natural language generation, there are often constraints on the target sentences in addition to fluency and naturalness requirements.
1 code implementation • 14 Nov 2018 • Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang
In this paper, we propose a grammar-based structural convolutional neural network (CNN) for code generation.
1 code implementation • ICLR 2020 • Nabiha Asghar, Lili Mou, Kira A. Selby, Kevin D. Pantasdo, Pascal Poupart, Xin Jiang
The memory bank provides a natural way of incremental domain adaptation (IDA): when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters and thus the model capacity.
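A minimal sketch of the progressive-memory idea (hypothetical PyTorch code; the slot counts, initialization, and attention readout are illustrative, not the paper's exact architecture):

    import torch
    import torch.nn as nn

    class ProgressiveMemory(nn.Module):
        # A bank of memory slots read out by dot-product attention.
        def __init__(self, n_slots, dim):
            super().__init__()
            self.memory = nn.Parameter(torch.randn(n_slots, dim) * 0.02)

        def expand(self, n_new):
            # Adapting to a new domain: append fresh slots, keeping old ones.
            new_slots = torch.randn(n_new, self.memory.size(1)) * 0.02
            self.memory = nn.Parameter(torch.cat([self.memory.data, new_slots]))

        def forward(self, query):                    # query: (batch, dim)
            attn = torch.softmax(query @ self.memory.t(), dim=-1)
            return attn @ self.memory                # (batch, dim) readout

    bank = ProgressiveMemory(n_slots=8, dim=16)
    bank.expand(4)                                   # 4 extra slots for a new domain
    print(bank(torch.randn(2, 16)).shape)            # torch.Size([2, 16])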
3 code implementations • ACL 2019 • Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova
This paper tackles the problem of disentangling the latent variables of style and content in language models.
no code implementations • 6 Jul 2018 • Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, Sen Song
Both the classification result and when to make the classification are part of the decision process, which is controlled by a policy network and trained with reinforcement learning.
1 code implementation • NAACL 2019 • Hareesh Bahuleyan, Lili Mou, Hao Zhou, Olga Vechtomova
The variational autoencoder (VAE) imposes a probabilistic distribution (typically Gaussian) on the latent space and penalizes the Kullback–Leibler (KL) divergence between the posterior and prior.
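For the common case of a diagonal Gaussian posterior q(z|x) = N(μ, diag(σ²)) and a standard normal prior p(z) = N(0, I), this KL penalty has the well-known closed form (standard VAE algebra, not specific to this paper):

    D_{\mathrm{KL}}\bigl(q(z \mid x) \,\|\, p(z)\bigr) = \frac{1}{2} \sum_{i=1}^{d} \left( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 \right)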
2 code implementations • COLING 2018 • Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network.
no code implementations • 6 Dec 2017 • Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
1 code implementation • TACL 2018 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu
The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.
no code implementations • 12 Sep 2017 • Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, Lili Mou
Existing neural conversational models process natural language primarily on a lexico-syntactic level, thereby ignoring one of the most crucial components of human-to-human dialogue: its affective content.
1 code implementation • 1 Sep 2017 • Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
no code implementations • LREC 2018 • Zhao Meng, Lili Mou, Zhi Jin
Neural network-based dialog systems are attracting increasing attention in both academia and industry.
no code implementations • ACL 2017 • Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, Dongyan Zhao
Generative conversational systems are attracting increasing attention in natural language processing (NLP).
1 code implementation • 22 Mar 2017 • Zhao Meng, Lili Mou, Zhi Jin
Speaker change detection (SCD) is an important task in dialog modeling.
1 code implementation • 11 Jan 2017 • Chongyang Tao, Lili Mou, Dongyan Zhao, Rui Yan
Open-domain human-computer conversation has been attracting increasing attention over the past few years.
no code implementations • ICML 2017 • Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin
Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.
no code implementations • 13 Oct 2016 • Yiping Song, Lili Mou, Rui Yan, Li Yi, Zinan Zhu, Xiaohua Hu, Ming Zhang
In human-computer conversation systems, the context of a user-issued utterance is particularly important because it provides useful background information of the conversation.
1 code implementation • ACL 2016 • Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin
Such approaches are time- and memory-intensive because of the large number of parameters in the word embeddings and the output layer.
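A back-of-the-envelope calculation makes the point (the vocabulary size and embedding dimension below are illustrative, not taken from the paper):

    V, d = 50_000, 512               # illustrative vocabulary size and embedding dim
    input_embeddings = V * d         # word-embedding table
    output_layer = V * d             # softmax projection weights
    print(f"{input_embeddings:,} + {output_layer:,} parameters")
    # 25,600,000 + 25,600,000 — two V-by-d matrices dominate the model size
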
no code implementations • COLING 2016 • Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Using neural networks to generate replies in human-computer dialogue systems has been attracting increasing attention over the past few years.
no code implementations • 15 Apr 2016 • Xiang Li, Lili Mou, Rui Yan, Ming Zhang
In this paper, we propose StalemateBreaker, a conversation system that can proactively introduce new content when appropriate.
no code implementations • EMNLP 2016 • Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
Transfer learning aims to make use of valuable knowledge in a source domain to improve model performance in a target domain.
no code implementations • COLING 2016 • Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks).
Ranked #2 on Relation Classification on SemEval 2010 Task 8
no code implementations • ACL 2016 • Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences.
Ranked #85 on Natural Language Inference on SNLI
no code implementations • 21 Dec 2015 • Lili Mou, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Provided a specific word, we use RNNs to generate previous words and future words, either simultaneously or asynchronously, resulting in two model variants.
no code implementations • 25 Oct 2015 • Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin
This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion.
no code implementations • EMNLP 2015 • Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP.
no code implementations • 15 Aug 2015 • Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin
Relation classification is an important research arena in the field of natural language processing (NLP).
Ranked #4 on Relation Classification on SemEval 2010 Task 8
no code implementations • 15 Jun 2015 • Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin
Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight neural networks with high performance are particularly needed in various resource-restricted systems.
no code implementations • EMNLP 2015 • Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling.
Ranked #7 on Text Classification on TREC-6
8 code implementations • 18 Sep 2014 • Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin
Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also attracted growing interest in the artificial intelligence community.
1 code implementation • 11 Sep 2014 • Lili Mou, Ge Li, Yuxuan Liu, Hao Peng, Zhi Jin, Yan Xu, Lu Zhang
In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis.