no code implementations • COLING 2022 • Haoxiang Shi, Rongsheng Zhang, Jiaan Wang, Cen Wang, Yinhe Zheng, Tetsuya Sakai
Pre-trained Language Models (PLMs) are the cornerstone of modern Natural Language Processing (NLP).
1 code implementation • 9 Jan 2024 • Jiaan Wang, Jianfeng Qu, Kexin Wang, Zhixu Li, Wen Hua, Ximing Li, An Liu
Knowledge-grounded dialogue (KGD) learns to generate an informative response based on a given dialogue context and external knowledge (e.g., knowledge graphs; KGs).
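As a rough sketch of the setting, a KGD instance pairs a dialogue context with KG triples and a target response; the field names, the `</s>` separator, and the linearization below are illustrative assumptions, not the paper's actual format:

```python
# Hypothetical KGD training instance: a dialogue context paired with
# KG triples (subject, relation, object) that ground the response.
example = {
    "context": [
        "Have you seen Inception?",
        "Yes, I loved it. Who directed it again?",
    ],
    "knowledge": [
        ("Inception", "directed_by", "Christopher Nolan"),
        ("Inception", "release_year", "2010"),
    ],
    "response": "It was directed by Christopher Nolan, back in 2010.",
}

def linearize(instance):
    """Flatten the context and triples into a single model input string."""
    ctx = " </s> ".join(instance["context"])
    kg = " ; ".join(f"{s} | {r} | {o}" for s, r, o in instance["knowledge"])
    return f"context: {ctx} knowledge: {kg}"

print(linearize(example))
```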
no code implementations • 16 Dec 2023 • Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao
To collect concept-image and concept-description alignments, we propose a context-aware multi-modal symbol grounding approach that considers context information in existing large-scale image-text pairs with respect to each concept.
2 code implementations • 16 Sep 2023 • Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu
With the recent advancements in large language models (LLMs), knowledge editing has been shown as a promising technique to adapt LLMs to new knowledge without retraining from scratch.
1 code implementation • 9 Aug 2023 • Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao
Multi-modal knowledge graphs (MMKGs) combine data of different modalities (e.g., text and images) for a comprehensive understanding of entities.
no code implementations • 17 Jun 2023 • Jiaan Wang, Jianfeng Qu, Yunlong Liang, Zhixu Li, An Liu, Guanfeng Liu, Xin Zheng
Constructing commonsense knowledge graphs (CKGs) has attracted wide research attention due to its importance in cognitive intelligence.
1 code implementation • 22 May 2023 • Yunlong Liang, Fandong Meng, Jiaan Wang, Jinan Xu, Yufeng Chen, Jie Zhou
Further, we propose a dual knowledge distillation and target-oriented vision modeling framework for the M³S task.
no code implementations • 16 May 2023 • Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou
In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model can process documents in any language and generate their summaries in any language.
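One common way to realize such an any-to-any model is to condition a multilingual seq2seq model on language tags; the sketch below assumes mBART-style tags purely for illustration and is not necessarily the paper's exact formulation:

```python
# Hypothetical M2MS preprocessing: prepend source/target language tags so
# one model can map any source language to any target language.
def build_m2ms_input(document: str, src_lang: str, tgt_lang: str) -> str:
    return f"<{src_lang}> <{tgt_lang}> {document}"

# The same model then covers MLS (src == tgt) and CLS (src != tgt).
print(build_m2ms_input("Le chat dort sur le canapé.", "fr_XX", "fr_XX"))
print(build_m2ms_input("Le chat dort sur le canapé.", "fr_XX", "en_XX"))
```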
no code implementations • 4 May 2023 • Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou
Specifically, we propose a "versatile" model, i.e., Unified Model Learning for NMT (UMLNMT), that works with data from different tasks and can translate well in multiple settings simultaneously, theoretically in as many settings as possible.
1 code implementation • 29 Mar 2023 • Yuxuan Cao, Jiarong Xu, Carl Yang, Jiaan Wang, Yunchao Zhang, Chunping Wang, Lei Chen, Yang Yang
All convex combinations of graphon bases give rise to a generator space, and the graphs generated from it form the solution space for downstream data that can benefit from pre-training.
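As a toy illustration of the generator-space idea, assuming graphons are discretized as step-function matrices (the bases and mixing weights below are made up, not learned from data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up graphon bases, discretized as symmetric K x K step functions
# with entries in [0, 1]; real bases would be estimated from graph data.
K = 4
W1 = np.full((K, K), 0.8)            # uniformly dense structure
W2 = 0.1 + 0.7 * np.eye(K)           # block-diagonal-heavy structure

def mix(graphons, weights):
    """A point in the generator space: a convex combination of bases."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * G for w, G in zip(weights, graphons))

def sample_graph(W, n):
    """Sample an n-node undirected graph from a discretized graphon W."""
    u = rng.integers(0, W.shape[0], size=n)   # latent position per node
    P = W[np.ix_(u, u)]                       # pairwise edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)                         # drop self-loops, lower half
    return A + A.T                            # symmetrize

W_mix = mix([W1, W2], [0.3, 0.7])             # one point in the generator space
print(sample_graph(W_mix, 6))
```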
1 code implementation • 7 Mar 2023 • Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou
In detail, we regard ChatGPT as a human evaluator and give it task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt it to evaluate the generated results of NLG models.
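A minimal sketch of what such an instruction could look like; the wording is hypothetical, and the paper's actual prompts may differ:

```python
# Hypothetical prompt builder in the spirit of the described setup:
# task-specific and aspect-specific instructions for an LLM evaluator.
def build_eval_prompt(task: str, aspect: str, source: str, output: str) -> str:
    return (
        f"You are a human evaluator for {task}.\n"
        f"Score the following output for {aspect} on a scale of 1-5.\n"
        f"Source: {source}\n"
        f"Output: {output}\n"
        f"Score:"
    )

prompt = build_eval_prompt(
    task="summarization",
    aspect="relevance",
    source="The city council approved the new transit budget on Monday...",
    output="The council passed the transit budget.",
)
print(prompt)  # send this to a chat LLM and parse the returned score
```

The returned completion can then be parsed into a numeric score and compared against human judgments.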
no code implementations • 28 Feb 2023 • Jiaan Wang, Yunlong Liang, Fandong Meng, Beiqi Zou, Zhixu Li, Jianfeng Qu, Jie Zhou
Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language.
1 code implementation • 15 Dec 2022 • Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou
However, less attention has been paid to the visual features from the perspective of the summary, which may limit the model performance, especially in the low- and zero-resource scenarios.
no code implementations • 14 Dec 2022 • Jiaan Wang, Fandong Meng, Yunlong Liang, Tingyi Zhang, Jiarong Xu, Zhixu Li, Jie Zhou
In detail, we find that (1) the translationese in documents or summaries of test sets might lead to a discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets would harm model performance in real-world applications; and (3) though machine-translated documents involve translationese, they are very useful for building CLS systems for low-resource languages under specific training strategies.
1 code implementation • 1 Dec 2022 • Shaohui Zheng, Zhixu Li, Jiaan Wang, Jianfeng Qu, An Liu, Lei Zhao, Zhigang Chen
Cross-Lingual Summarization (CLS) aims at generating summaries in one language for the given documents in another language.
1 code implementation • 24 Oct 2022 • Duo Zheng, Tao Kong, Ya Jing, Jiaan Wang, Xiaojie Wang
Additionally, IRTF can generate pseudo input regions for the REC task, enabling the REC and REG tasks to share an identical representation space in a uniform way.
1 code implementation • 18 Jul 2022 • Jiaan Wang, Tingyi Zhang, Haoxiang Shi
Sports game summarization aims to generate sports news based on real-time commentaries.
1 code implementation • 17 Jul 2022 • Kexin Wang, Zhixu Li, Jiaan Wang, Jianfeng Qu, Ying He, An Liu, Lei Zhao
Nevertheless, the correlations between knowledge implied in the multi-turn context and the transition regularities between relations in KGs are under-explored.
no code implementations • 23 Mar 2022 • Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou
Cross-lingual summarization is the task of generating a summary in one language (e.g., English) for the given document(s) in a different language (e.g., Chinese).
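For illustration, the two classic pipeline baselines in this area, summarize-then-translate and translate-then-summarize, can be sketched as follows; `translate()` and `summarize()` are toy stand-ins for real MT and monolingual summarization models:

```python
# Toy stand-ins for real MT / monolingual summarization models.
def translate(text: str, src: str, tgt: str) -> str:
    return f"[{src}->{tgt}] {text}"           # pretend-translate

def summarize(text: str) -> str:
    return text.split(". ")[0] + "."          # naive: keep the first sentence

def summarize_then_translate(doc: str, src: str, tgt: str) -> str:
    return translate(summarize(doc), src, tgt)

def translate_then_summarize(doc: str, src: str, tgt: str) -> str:
    return summarize(translate(doc, src, tgt))

doc = "The council approved the budget. Debate lasted four hours."
print(summarize_then_translate(doc, "en", "zh"))
print(translate_then_summarize(doc, "en", "zh"))
```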
2 code implementations • 11 Feb 2022 • Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, Jie Zhou
We present ClidSum, a benchmark dataset for building cross-lingual summarization systems on dialogue documents.
1 code implementation • 29 Jan 2022 • Jiaan Wang, Beiqi Zou, Zhixu Li, Jianfeng Qu, Pengpeng Zhao, An Liu, Lei Zhao
Story ending generation is an interesting and challenging task, which aims to generate a coherent and reasonable ending given a story context.
1 code implementation • 24 Nov 2021 • Jiaan Wang, Zhixu Li, Tingyi Zhang, Duo Zheng, Jianfeng Qu, An Liu, Lei Zhao, Zhigang Chen
Additionally, we also introduce a knowledge-enhanced summarizer that utilizes both live commentaries and the knowledge to generate sports news.
2 code implementations • 12 Oct 2021 • Jiaan Wang, Zhixu Li, Qiang Yang, Jianfeng Qu, Zhigang Chen, Qingsheng Liu, Guoping Hu
Sports game summarization aims to generate news articles from live text commentaries.
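For a concrete picture, one instance of this task might look like the following; the schema and field names are made up for illustration:

```python
# Illustrative shape of a sports game summarization instance: a timestamped
# live commentary stream paired with the target news article.
game = {
    "commentary": [
        {"time": "12'", "score": "1-0", "text": "Goal! A low strike from the edge of the box."},
        {"time": "45'", "score": "1-0", "text": "Half-time whistle blows."},
        {"time": "78'", "score": "1-1", "text": "Header from the corner levels the match."},
    ],
    "news": "The hosts took an early lead, but a late header secured a 1-1 draw.",
}

# A model consumes the long, noisy commentary stream and generates the news.
source = " ".join(f"({c['time']} {c['score']}) {c['text']}" for c in game["commentary"])
print(source)
```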
1 code implementation • Findings (EMNLP) 2021 • Duo Zheng, Zipeng Xu, Fandong Meng, Xiaojie Wang, Jiaan Wang, Jie Zhou
To enhance the VD Questioner: 1) we propose a Related entity enhanced Questioner (ReeQ) that generates questions under the guidance of related entities and learns entity-based questioning strategies from human dialogs; 2) we propose an Augmented Guesser (AugG) that is strong and optimized especially for the VD setting.
1 code implementation • 27 Jun 2021 • Jiaan Wang, Zhixu Li, Binbin Gu, Tingyi Zhang, Qingsheng Liu, Zhigang Chen
In addition, our approach also helps to improve the accuracy of its downstream task, song search, by more than 10.6%.