no code implementations • AACL (IWDP) 2020 • Sujian Li
Discourse parsing aims to comprehensively acquire the logical structure of a whole text, which can benefit downstream applications such as summarization, reading comprehension, and QA.
1 code implementation • ACL (IWPT) 2021 • Tianyi Li, Sujian Li, Mark Steedman
Strong and affordable in-domain data is a desirable asset when transferring trained semantic parsers to novel domains.
no code implementations • Findings (EMNLP) 2021 • Simin Rao, Hua Zheng, Sujian Li
Specifically, we focus on adversarial training and cross-lingual pre-training methods to transfer the LR knowledge learned from annotated data in the resource-rich English language to Chinese.
no code implementations • Findings (ACL) 2022 • Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, Dai Dai
However, the existing method depends on the relevance between tasks and is prone to inter-type confusion. In this paper, we propose a novel two-stage framework, Learn-and-Review (L&R), for continual NER under the type-incremental setting to alleviate these issues. Specifically, in the learning stage, we distill the old knowledge from the teacher to a student on the current dataset.
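A minimal sketch of what such a learning-stage objective could look like (the shapes, loss weighting, and function names below are illustrative assumptions, not the paper's implementation):

```python
# Sketch: distill old-type knowledge from a frozen teacher while learning new types.
import torch
import torch.nn.functional as F

def learning_stage_loss(student_logits, teacher_logits, new_type_labels,
                        old_type_mask, alpha=0.5, temperature=2.0):
    """student_logits: (batch, seq_len, num_old + num_new) scores from the student.
    teacher_logits: (batch, seq_len, num_old) scores from the old-task teacher.
    new_type_labels: (batch, seq_len) gold labels on the current dataset.
    old_type_mask:  boolean mask selecting the old-type columns of the student."""
    # Standard cross-entropy on the newly annotated entity types.
    ce = F.cross_entropy(student_logits.flatten(0, 1), new_type_labels.flatten())

    # Distill old knowledge: match the student's old-type distribution to the teacher's.
    s_old = student_logits[..., old_type_mask] / temperature
    t_old = teacher_logits / temperature
    kd = F.kl_div(F.log_softmax(s_old, dim=-1),
                  F.softmax(t_old, dim=-1), reduction="batchmean")

    return ce + alpha * kd
```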
no code implementations • CCL 2020 • Wenyu Guan, Qianying Liu, Tianyi Li, Sujian Li
To solve this problem, we propose a two-step approach which first selects and orders the important data records and then generates text from the noise-reduced data.
no code implementations • CCL 2021 • Simin Rao, Hua Zheng, Sujian Li
The concept of reading-level grading was proposed by educators in the early twentieth century. As reading has received growing attention, readability grading has attracted increasing interest, and automatic readability assessment techniques have developed to a certain extent. This paper summarizes recent research progress in readability assessment. It first introduces the existing grading standards and the various frameworks and corpus resources derived from them. On this basis, it surveys the three classes of methods widely applied to automatic readability assessment: formula-based methods, traditional machine learning methods, and the recently popular deep learning methods, and analyzes their respective strengths, weaknesses, and possible directions for improvement in light of experimental results. Finally, the paper summarizes and looks ahead to future research directions and potential application areas of readability assessment.
no code implementations • COLING 2022 • Yu Xia, Wenbin Jiang, Yajuan Lyu, Sujian Li
Existing works are based on end-to-end neural models which do not explicitly model the intermediate states and lack interpretability for the parsing process.
no code implementations • 7 May 2024 • Wenhao Wu, Yizhong Wang, Yao Fu, Xiang Yue, Dawei Zhu, Sujian Li
Effectively handling instructions with extremely long context remains a challenge for Large Language Models (LLMs), typically necessitating high-quality long data and substantial computational resources.
1 code implementation • 18 Apr 2024 • Dawei Zhu, Liang Wang, Nan Yang, YiFan Song, Wenhao Wu, Furu Wei, Sujian Li
This paper explores context window extension of existing embedding models, pushing the limit to 32k without requiring additional training.
1 code implementation • 31 Mar 2024 • Dawei Zhu, Wenhao Wu, YiFan Song, Fangwei Zhu, Ziqiang Cao, Sujian Li
Due to the scarcity of annotated data, data augmentation is commonly used for training coherence evaluation models.
1 code implementation • 4 Mar 2024 • YiFan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, Bill Yuchen Lin
This iterative cycle of exploration and training fosters continued improvement in the agents.
no code implementations • 28 Feb 2024 • Jiebin Zhang, Eugene J. Yu, Qinyu Chen, Chenhao Xiong, Dawei Zhu, Han Qian, Mingbo Song, Xiaoguang Li, Qun Liu, Sujian Li
In today's fast-paced world, the growing demand to quickly generate comprehensive and accurate Wikipedia documents for emerging events is both crucial and challenging.
no code implementations • 4 Feb 2024 • Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, ZiHao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang
The ever-growing ecosystem of LLMs has posed a challenge in selecting the most appropriate pre-trained model to fine-tune amidst a sea of options.
1 code implementation • 20 Nov 2023 • Lei Geng, Xu Yan, Ziqiang Cao, Juntao Li, Wenjie Li, Sujian Li, Xinjie Zhou, Yang Yang, Jun Zhang
We construct a biomedical multilingual corpus by incorporating three granularities of knowledge alignment (entity, fact, and passage levels) into monolingual corpora.
1 code implementation • 10 Oct 2023 • YiFan Song, Peiyi Wang, Weimin Xiong, Dawei Zhu, Tianyu Liu, Zhifang Sui, Sujian Li
Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks.
1 code implementation • 10 Oct 2023 • Weimin Xiong, YiFan Song, Peiyi Wang, Sujian Li
Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations.
1 code implementation • 19 Sep 2023 • Dawei Zhu, Nan Yang, Liang Wang, YiFan Song, Wenhao Wu, Furu Wei, Sujian Li
To decouple train length from target length for efficient context window extension, we propose Positional Skip-wisE (PoSE) training that smartly simulates long inputs using a fixed context window.
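A rough sketch of the positional-skipping idea (the chunking strategy and bias sampling below are illustrative assumptions, not the paper's exact recipe):

```python
# Sketch: manipulate position indices so a fixed training window covers
# relative positions up to a much longer target context length.
import random

def skipwise_position_ids(train_len: int, target_len: int, num_chunks: int = 2):
    """Assign position ids to a window of `train_len` tokens so that the
    indices span up to `target_len`, without lengthening the window."""
    # Split the training window into contiguous chunks.
    boundaries = sorted(random.sample(range(1, train_len), num_chunks - 1))
    chunks = list(zip([0] + boundaries, boundaries + [train_len]))

    # Total number of position indices we are allowed to "skip".
    total_skip = target_len - train_len
    # Sample a non-decreasing skip bias for each chunk; the first chunk keeps
    # its original positions.
    cuts = sorted(random.randint(0, total_skip) for _ in range(num_chunks - 1))
    biases = [0] + cuts

    position_ids = []
    for (start, end), bias in zip(chunks, biases):
        position_ids.extend(range(start + bias, end + bias))
    return position_ids

# Example: a 2k training window whose positions fall inside a 16k target range.
ids = skipwise_position_ids(train_len=2048, target_len=16384)
assert len(ids) == 2048 and max(ids) < 16384
```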
no code implementations • 11 Jun 2023 • YiFan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts a coarse-to-fine online planning mechanism to enhance the abilities of task decomposition and API selection.
1 code implementation • 7 Jun 2023 • Shudi Hou, Yu Xia, Muhao Chen, Sujian Li
Traditional text classification typically categorizes texts into pre-defined coarse-grained classes, so the resulting models cannot handle the real-world scenario where finer-grained categories emerge periodically and are required for accurate services.
no code implementations • 12 May 2023 • YiFan Song, Peiyi Wang, Dawei Zhu, Tianyu Liu, Zhifang Sui, Sujian Li
Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks.
1 code implementation • 20 Mar 2023 • Hongbo Wang, Weimin Xiong, YiFan Song, Dawei Zhu, Yu Xia, Sujian Li
Joint entity and relation extraction (JERE) is one of the most important tasks in information extraction.
1 code implementation • 24 Feb 2023 • Shichao Sun, Ruifeng Yuan, Wenjie Li, Sujian Li
Unsupervised extractive summarization aims to extract salient sentences from a document as the summary without labeled data.
1 code implementation • 20 Dec 2022 • Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Sujian Li, Yajuan Lv
As a result, they perform poorly on real generated text and are heavily biased by their single-source upstream tasks.
no code implementations • 16 Nov 2022 • Yunji Li, Sujian Li, Xing Shi
In this paper, we propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage, with a comprehensive consideration of the aspects including accuracy, coverage, and informativeness.
no code implementations • 1 Nov 2022 • Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Ziqiang Cao, Sujian Li, Hua Wu
We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information.
no code implementations • 22 Oct 2022 • Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu
Though model robustness has been extensively studied in language understanding, the robustness of Seq2Seq generation remains understudied.
2 code implementations • 18 Oct 2022 • Xiangyang Li, Bo Chen, Huifeng Guo, Jingjie Li, Chenxu Zhu, Xiang Long, Sujian Li, Yichao Wang, Wei Guo, Longxia Mao, JinXing Liu, Zhenhua Dong, Ruiming Tang
The FE-Block module performs fine-grained and early feature interactions to explicitly capture the interactive signals between the user and item towers, while the CIR module leverages a contrastive interaction regularization to further enhance the interactions implicitly.
1 code implementation • 10 Oct 2022 • Peiyi Wang, YiFan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, Zhifang Sui
In this paper, through empirical studies we argue that this assumption may not hold, and an important reason for catastrophic forgetting is that the learned representations do not have good robustness against the appearance of analogous relations in the subsequent learning process.
1 code implementation • COLING 2022 • Dawei Zhu, Qiusi Zhan, Zhejian Zhou, YiFan Song, Jiebin Zhang, Sujian Li
Different from previous token-level or sentence-level counterparts, ConFiguRe aims at extracting a figurative unit from discourse-level context, and classifying the figurative unit into the right figure type.
no code implementations • 24 Aug 2022 • Qi Lv, Ziqiang Cao, Wenrui Xie, Derui Wang, Jingwen Wang, Zhiwei Hu, Tangkun Zhang, Ba Yuan, Yuanhang Li, Min Cao, Wenjie Li, Sujian Li, Guohong Fu
Furthermore, based on the similarity between video outlines and textual outlines, we use a large number of articles with chapter headings to pretrain our model.
no code implementations • 22 Aug 2022 • Xu Yan, Chunhui Ai, Ziqiang Cao, Min Cao, Sujian Li, Wenjie Li, Guohong Fu
While the builders of existing image-text retrieval datasets strive to ensure that the caption matches the linked image, they cannot prevent a caption from fitting other images.
no code implementations • NAACL 2022 • Xiangyang Li, Xiang Long, Yu Xia, Sujian Li
Text style transfer (TST) without parallel data has achieved some practical success.
no code implementations • ACL 2021 • Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang
Abstractive summarization of long documents or multiple documents remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.
no code implementations • ACL 2021 • Yi Cheng, SiYao Li, Bang Liu, Ruihui Zhao, Sujian Li, Chenghua Lin, Yefeng Zheng
This paper explores the task of Difficulty-Controllable Question Generation (DCQG), which aims at generating questions with required difficulty levels.
no code implementations • ACL 2022 • Qingxiu Dong, Ziwei Qin, Heming Xia, Tian Feng, Shoujie Tong, Haoran Meng, Lin Xu, Weidong Zhan, Sujian Li, Zhongyu Wei, Tianyu Liu, Zhifang Sui
It is a common practice for recent works in vision-language cross-modal reasoning to adopt a binary or multi-choice classification formulation that takes as input a set of source image(s) and a textual query.
no code implementations • 25 Apr 2021 • Wenhao Wu, Sujian Li
For a researcher, writing a good research statement is crucial but costs a lot of time and effort.
no code implementations • 17 Feb 2021 • Lianzhe Huang, Peiyi Wang, Sujian Li, Tianyu Liu, Xiaodong Zhang, Zhicong Cheng, Dawei Yin, Houfeng Wang
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a sentence, including target entities, associated sentiment polarities, and opinion spans which rationalize the polarities.
Ranked #8 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
1 code implementation • 7 Jan 2021 • Xiangyang Li, Yu Xia, Xiang Long, Zheng Li, Sujian Li
In this paper, we describe our system for the AAAI 2021 shared task of COVID-19 Fake News Detection in English, where we achieved 3rd place with a weighted F1 score of 0.9859 on the test set.
Ranked #1 on Fake News Detection on Grover-Mega
1 code implementation • CCL 2021 • Yi Cheng, Sujian Li, Yueyuan Li
For text-level discourse analysis, there are various discourse schemes but relatively few labeled data, because discourse research is still immature and it is labor-intensive to annotate the inner logic of a text.
no code implementations • COLING 2020 • Lianzhe Huang, Xin Sun, Sujian Li, Linhao Zhang, Houfeng Wang
In this paper, we incorporate syntactic awareness into the model via a graph attention network over the dependency tree structure, together with external pre-training knowledge from the BERT language model, which helps to better model the interaction between the context and the aspect words.
1 code implementation • 4 Oct 2020 • Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, Sadao Kurohashi
Automatically solving math word problems is a critical task in the field of natural language processing.
2 code implementations • CCL 2020 • Qianying Liu, Sicong Jiang, Yizhong Wang, Sujian Li
In this paper, we introduce LiveQA, a new question answering dataset constructed from play-by-play live broadcast.
no code implementations • ACL 2020 • Zhenwen Li, Wenhao Wu, Sujian Li
In this paper, we argue that elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization.
no code implementations • LREC 2020 • Sennan Liu, Shuang Zeng, Sujian Li
In this paper, to evaluate text coherence, we propose the paragraph ordering task in addition to sentence ordering.
no code implementations • WS 2019 • Yi Cheng, Sujian Li
Due to the absence of labeled data, discourse parsing still remains challenging in some languages.
no code implementations • WS 2019 • Tianyi Li, Sujian Li
Previous work on visual storytelling mainly focused on exploring image sequence as evidence for storytelling and neglected textual evidence for guiding story generation.
no code implementations • IJCNLP 2019 • Qianying Liu, Wenyv Guan, Sujian Li, Daisuke Kawahara
To address this problem, we propose a tree-structured decoding method that generates the abstract syntax tree of the equation in a top-down manner.
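A toy sketch of what top-down tree-structured decoding of an equation might look like, with the neural scoring model abstracted away (the names and structure here are illustrative assumptions, not the paper's architecture):

```python
# Sketch: expand an equation's abstract syntax tree node by node, top-down.
from dataclasses import dataclass
from typing import List

OPERATORS = {"+", "-", "*", "/"}

@dataclass
class Node:
    symbol: str                  # an operator, a number, or a problem quantity
    children: List["Node"]

def decode_tree(predict_symbol, parent_state=None) -> Node:
    """Predict a symbol for the current node; if it is an operator,
    recursively decode its left and right subtrees."""
    symbol = predict_symbol(parent_state)
    node = Node(symbol, [])
    if symbol in OPERATORS:
        node.children.append(decode_tree(predict_symbol, parent_state=symbol))
        node.children.append(decode_tree(predict_symbol, parent_state=symbol))
    return node

# Toy usage with a scripted predictor that yields (3 + 5) * 2 in pre-order.
script = iter(["*", "+", "3", "5", "2"])
tree = decode_tree(lambda _state: next(script))

def evaluate(n: Node) -> float:
    if n.symbol not in OPERATORS:
        return float(n.symbol)
    a, b = (evaluate(c) for c in n.children)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[n.symbol]

assert evaluate(tree) == 16.0
```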
2 code implementations • IJCNLP 2019 • Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang
Recently, researchers have explored graph neural network (GNN) techniques for text classification, since GNNs do well at handling complex structures and preserving global information.
Ranked #2 on Text Classification on Ohsumed
1 code implementation • IJCNLP 2019 • Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner
The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks.
1 code implementation • 10 Sep 2019 • Bowen Yu, Zhen-Yu Zhang, Xiaobo Shu, Yubin Wang, Tingwen Liu, Bin Wang, Sujian Li
Joint extraction of entities and relations aims to detect entity pairs along with their relations using a single model.
Ranked #1 on Relation Extraction on NYT-single
no code implementations • 23 Aug 2019 • Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang
In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.
no code implementations • IJCNLP 2019 • Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu
This paper presents a new sequence-to-sequence (seq2seq) pre-training method PoDA (Pre-training of Denoising Autoencoders), which learns representations suitable for text generation tasks.
1 code implementation • IJCAI 2019 • Bowen Yu, Zhen-Yu Zhang, Tingwen Liu, Bin Wang, Sujian Li, Quangang Li
Relation extraction studies the issue of predicting semantic relations between pairs of entities in sentences.
Ranked #30 on Relation Extraction on TACRED
1 code implementation • ACL 2019 • An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li
In this work, we investigate the potential of leveraging external knowledge bases (KBs) to further improve BERT for MRC.
no code implementations • ACL 2019 • Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, Houfeng Wang
Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem.
Ranked #1 on Term Extraction on SemEval 2014 Task 4 Laptop
no code implementations • 30 Jun 2019 • Xin Zhang, An Yang, Sujian Li, Yizhong Wang
Machine reading comprehension aims to teach machines to understand a text like a human and is a new challenging direction in Artificial Intelligence.
no code implementations • ALTA 2019 • Wenyv Guan, Qianying Liu, Guangzhi Han, Bin Wang, Sujian Li
The methods first generate a rough sketch in the coarse stage and then use the sketch to get the final result in the fine stage.
no code implementations • EMNLP 2018 • Chen Shi, Qi Chen, Lei Sha, Sujian Li, Xu Sun, Houfeng Wang, Lintao Zhang
The lack of labeled data is one of the main challenges when building a task-oriented dialogue system.
no code implementations • EMNLP 2018 • Dehong Ma, Sujian Li, Houfeng Wang
Targeted sentiment analysis (TSA) aims at extracting targets and classifying their sentiment classes.
no code implementations • 30 Aug 2018 • Jingfeng Yang, Sujian Li
Discourse segmentation aims to segment Elementary Discourse Units (EDUs) and is a fundamental task in discourse analysis.
1 code implementation • EMNLP 2018 • Yizhong Wang, Sujian Li, Jingfeng Yang
Discourse segmentation, which segments texts into Elementary Discourse Units, is a fundamental step in discourse analysis.
no code implementations • COLING 2018 • Liang Wang, Sujian Li, Wei Zhao, Kewei Shen, Meng Sun, Ruoyu Jia, Jingming Liu
Cloze-style reading comprehension has been a popular task for measuring the progress of natural language understanding in recent years.
no code implementations • 4 Aug 2018 • Niantao Xie, Sujian Li, Huiling Ren, Qibin Zhai
Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-the-art ROUGE scores.
no code implementations • ACL 2018 • Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei
Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to be unstable.
Ranked #23 on Text Summarization on GigaWord
1 code implementation • ACL 2018 • An Yang, Sujian Li
An annotated corpus of discourse relations benefits NLP tasks such as machine translation and question answering.
no code implementations • WS 2018 • An Yang, Kai Liu, Jing Liu, Yajuan Lyu, Sujian Li
Current evaluation metrics for question-answering-based machine reading comprehension (MRC) systems generally focus on the lexical overlap between candidate and reference answers, such as ROUGE and BLEU.
no code implementations • ACL 2018 • Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, Haifeng Wang
Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine.
Ranked #3 on Question Answering on MS MARCO
1 code implementation • NAACL 2018 • Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, Xuancheng Ren
The existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of learning the meaning of the words.
no code implementations • IJCNLP 2017 • Yizhong Wang, Sujian Li, Jingfeng Yang, Xu Sun, Houfeng Wang
Identifying implicit discourse relations between text spans is a challenging task because it requires understanding the meaning of the text.
no code implementations • 13 Nov 2017 • Ziqiang Cao, Furu Wei, Wenjie Li, Sujian Li
While previous abstractive summarization approaches usually focus on the improvement of informativeness, we argue that faithfulness is also a vital prerequisite for a practical abstractive summarization system.
Ranked #22 on Text Summarization on GigaWord
no code implementations • 4 Nov 2017 • Jingjing Xu, Xu Sun, Sujian Li, Xiaoyan Cai, Bingzhen Wei
In this paper, we propose a deep stacking framework to improve the performance on word segmentation tasks with insufficient data by integrating datasets from diverse domains.
no code implementations • IJCNLP 2017 • Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang, Xu Sun
Document-level sentiment classification aims to assign the user reviews a sentiment polarity.
Ranked #5 on Sentiment Analysis on User and product information
5 code implementations • 4 Sep 2017 • Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang
In this paper, we argue that both targets and contexts deserve special treatment and need to learn their own representations via interactive learning.
no code implementations • EMNLP 2017 • Liang Wang, Sujian Li, Yajuan Lv, Houfeng Wang
Topic segmentation plays an important role for discourse parsing and information retrieval.
1 code implementation • 1 Sep 2017 • Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
no code implementations • SEMEVAL 2017 • Liang Wang, Sujian Li
This paper presents a system that participated in SemEval 2017 Task 10 (subtask A and subtask B): Extracting Keyphrases and Relations from Scientific Publications (Augenstein et al., 2017).
1 code implementation • ACL 2017 • Yizhong Wang, Sujian Li, Houfeng Wang
Previous work introduced transition-based algorithms to form a unified architecture of parsing rhetorical structures (including span, nuclearity and relation), but did not achieve satisfactory performance.
Ranked #5 on Discourse Parsing on RST-DT
no code implementations • COLING 2016 • Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui
In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.
no code implementations • COLING 2016 • Lei Sha, Baobao Chang, Zhifang Sui, Sujian Li
After reading the premise again, the model gains a better understanding of the premise, which in turn also affects its understanding of the hypothesis.
Ranked #42 on Natural Language Inference on SNLI
no code implementations • 28 Nov 2016 • Ziqiang Cao, Chuwei Luo, Wenjie Li, Sujian Li
In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder.
no code implementations • 28 Nov 2016 • Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei
Developed so far, multi-document summarization has reached its bottleneck due to the lack of sufficient training data and diverse categories of documents.
no code implementations • EMNLP 2016 • Yang Liu, Sujian Li
Recognizing implicit discourse relations is a challenging but important task in the field of Natural Language Processing.
no code implementations • COLING 2016 • Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei, Yan-ran Li
Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization.
no code implementations • 9 Mar 2016 • Yang Liu, Sujian Li, Xiaodong Zhang, Zhifang Sui
Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser.
no code implementations • NAACL 2016 • Lei Sha, Sujian Li, Baobao Chang, Zhifang Sui
Automatic event schema induction (AESI) aims to extract meta-events from raw text; in other words, to find out what types (templates) of events may exist in the raw text and what roles (slots) may exist in each event type.
no code implementations • 26 Nov 2015 • Ziqiang Cao, Chengyao Chen, Wenjie Li, Sujian Li, Furu Wei, Ming Zhou
Both informativeness and readability of the collected summaries are verified by manual judgment.
no code implementations • EMNLP 2015 • Yan-ran Li, Wenjie Li, Fei Sun, Sujian Li
Distributed word representations are very useful for capturing semantic information and have been successfully applied in a variety of NLP tasks, especially on English.
no code implementations • IJCNLP 2015 • Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, Houfeng Wang
Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees.
Ranked #5 on Relation Classification on SemEval 2010 Task 8
no code implementations • 8 Jul 2015 • Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li, Ming Zhou
However, according to our quantitative analysis, none of the existing summarization models can always produce high-quality summaries for different document sets, and even a summarization model with good overall performance may produce low-quality summaries for some document sets.
no code implementations • TACL 2013 • Jiwei Li, Sujian Li
Both supervised learning methods and LDA-based topic models have been successfully applied in the field of query-focused multi-document summarization.
no code implementations • 10 Dec 2012 • Jiwei Li, Sujian Li
Graph-based semi-supervised learning has proven to be an effective approach for query-focused multi-document summarization.