Zero pronoun resolution aims to recognize dropped pronouns and identify their anaphoric mentions, while non-zero coreference resolution aims to cluster mentions referring to the same entity.
Multi-modal knowledge graphs (MMKGs) combine different modal data (e.g., text and image) for a comprehensive understanding of entities.
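To make the MMKG notion concrete, here is a minimal sketch of one way such a graph could be represented — a dict-based store where each entity carries attributes from several modalities. All identifiers and field names are illustrative, not from any specific MMKG framework:

```python
# Illustrative multi-modal knowledge graph: entities carry text and
# image attributes, and triples link entities by relations.
# All IDs, field names, and file names are hypothetical.
mmkg = {
    "entities": {
        "Q937": {
            "label": "Albert Einstein",
            "text": "German-born theoretical physicist.",  # text modality
            "images": ["einstein_portrait.jpg"],           # image modality
        },
        "Q5879": {
            "label": "Ulm",
            "text": "City in Baden-Wuerttemberg, Germany.",
            "images": ["ulm_minster.jpg"],
        },
    },
    "triples": [
        ("Q937", "place_of_birth", "Q5879"),
    ],
}

def modalities(entity_id):
    """Return which modalities are available for an entity."""
    e = mmkg["entities"][entity_id]
    return [m for m in ("text", "images") if e.get(m)]

print(modalities("Q937"))  # → ['text', 'images']
```

The point of the structure is that downstream tasks can check which modalities an entity provides before fusing them.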
no code implementations • 11 Jul 2023 • Zhouhon Gu, Zihan Li, Lin Zhang, Zhuozhi Xiong, Haoning Ye, Yikai Zhang, Wenhao Huang, Xiaoxuan Zhu, Qianyu He, Rui Xu, Sihang Jiang, Shusen Wang, Zili Wang, Hongwei Feng, Zhixu Li, Yanghua Xiao
Informal reasoning is the ability to reason based on common sense, experience, and intuition. Humans use informal reasoning every day to extract the most influential elements for decision-making from large amounts of life-like information. With the rapid development of language models, hope has emerged for the realization of general artificial intelligence.
Constructing commonsense knowledge graphs (CKGs) has attracted wide research attention due to its importance in cognitive intelligence.
In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language.
The previous methods suffer from low efficiency, since they waste considerable time when most of the newly arriving concepts are in fact noisy.
In detail, we regard ChatGPT as a human evaluator and give it task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt ChatGPT to evaluate the generated results of NLG models.
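A hedged sketch of how such a task- and aspect-specific evaluation instruction might be assembled. The prompt wording and function names below are my own illustration, not the paper's actual template:

```python
def build_eval_prompt(task, aspect, source, output):
    """Assemble an instruction asking an LLM to score a generated text
    on one quality aspect, in the spirit of LLM-as-evaluator setups.
    The exact wording is illustrative, not the paper's template."""
    return (
        f"You will evaluate the output of a {task} system.\n"
        f"Rate the {aspect} of the output on a scale of 1-5.\n\n"
        f"Source:\n{source}\n\n"
        f"Output:\n{output}\n\n"
        f"Answer with a single integer score."
    )

prompt = build_eval_prompt(
    task="summarization",
    aspect="relevance",
    source="The city council met on Tuesday to discuss the new budget...",
    output="The council discussed the budget.",
)
print(prompt)
```

The prompt would then be sent to the chat model, and the returned integer parsed as the aspect score.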
Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language.
Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language.
Cross-Lingual Summarization (CLS) aims at generating summaries in one language for the given documents in another language.
To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM).
Nevertheless, the correlations between knowledge implied in the multi-turn context and the transition regularities between relations in KGs are under-explored.
In this paper, we present WikiDiverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base.
Cross-lingual summarization is the task of generating a summary in one language (e.g., English) for the given document(s) in a different language (e.g., Chinese).
In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., letting the PLMs infer the shared properties of similes.
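One natural way to probe a masked PLM for simile properties is to cast the simile as a cloze query. The template below is my own illustration of the idea, not necessarily the paper's exact formulation:

```python
def simile_to_cloze(tenor, vehicle, mask_token="[MASK]"):
    """Turn a simile's tenor and vehicle into a cloze query whose blank
    is the shared property, e.g. 'The lawyer is as [MASK] as a shark.'
    The template is illustrative, not the paper's exact one."""
    return f"The {tenor} is as {mask_token} as a {vehicle}."

query = simile_to_cloze("lawyer", "shark")
# A masked language model would then be asked to fill the blank with a
# candidate property such as 'aggressive'; its ranking over candidates
# reveals what property it believes the two terms share.
print(query)
```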
In this survey on MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by the preliminaries on multi-modal tasks and techniques.
We present ClidSum, a benchmark dataset for building cross-lingual summarization systems on dialogue documents.
Story ending generation is an interesting and challenging task, which aims to generate a coherent and reasonable ending given a story context.
Additionally, we introduce a knowledge-enhanced summarizer that utilizes both live commentaries and the knowledge to generate sports news.
Sports game summarization aims to generate news articles from live text commentaries.
In addition, our approach improves the accuracy of its downstream task, song search, by more than 10.6%.
Specifically, we first extend BGEM to model group-item interactions, and then, to overcome the limited and sparse interaction data generated by occasional groups, we propose a self-attentive mechanism to represent groups based on their members.
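A self-attentive group representation of this kind is commonly implemented as additive attention pooling over member embeddings. The sketch below shows one such formulation; the parameter shapes and scoring function are my own assumptions, not the paper's exact architecture:

```python
import numpy as np

def self_attentive_group_embedding(member_embs, w, b, u):
    """Aggregate member embeddings into a single group embedding via
    additive self-attention pooling (a common formulation; shapes are
    assumptions, not the paper's exact ones).
    member_embs: (n_members, d); w: (d, d); b: (d,); u: (d,)."""
    h = np.tanh(member_embs @ w + b)       # (n, d) hidden projection
    scores = h @ u                         # (n,) unnormalized attention
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()                   # attention weights sum to 1
    return alpha @ member_embs, alpha      # weighted sum, weights

rng = np.random.default_rng(0)
d, n = 8, 4                                # embedding dim, group size
members = rng.normal(size=(n, d))          # one embedding per member
g, alpha = self_attentive_group_embedding(
    members, rng.normal(size=(d, d)), rng.normal(size=d),
    rng.normal(size=d))
```

Because the weights are learned from the members themselves, the group embedding can emphasize influential members even when the group has no interaction history of its own.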
Furthermore, to reduce the number of parameters and improve efficiency, we integrate coupled input and forget gates into our proposed model.
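Coupling the input and forget gates is the CIFG idea: the forget gate is tied to one minus the input gate, so one gate's worth of parameters is saved. A minimal single-step sketch of a generic CIFG-style recurrent cell, not the paper's exact model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cifg_step(x, h, c, params):
    """One step of an LSTM-style cell with coupled input/forget gates:
    f = 1 - i, so the forget gate needs no parameters of its own.
    A generic sketch, not the paper's exact model."""
    Wi, Ui, bi, Wo, Uo, bo, Wc, Uc, bc = params
    i = sigmoid(x @ Wi + h @ Ui + bi)      # input gate
    f = 1.0 - i                            # coupled forget gate (no params)
    o = sigmoid(x @ Wo + h @ Uo + bo)      # output gate
    c_tilde = np.tanh(x @ Wc + h @ Uc + bc)  # candidate cell state
    c_new = f * c + i * c_tilde            # convex blend of old/new memory
    h_new = o * np.tanh(c_new)             # hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
dx, dh = 5, 3                              # input dim, hidden dim

def make(shape):
    return rng.normal(scale=0.1, size=shape)

params = (make((dx, dh)), make((dh, dh)), np.zeros(dh),   # input gate
          make((dx, dh)), make((dh, dh)), np.zeros(dh),   # output gate
          make((dx, dh)), make((dh, dh)), np.zeros(dh))   # candidate
h, c = np.zeros(dh), np.zeros(dh)
h, c = cifg_step(rng.normal(size=dx), h, c, params)
```

Relative to a full LSTM, this drops the forget gate's weight matrices and bias, which is where the parameter savings come from.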