Inspired by human fact-checkers, who use different types of evidence (e.g., tables, images, audio) in addition to text, several datasets with tabular evidence have been released in recent years.
In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge.
Evidence data for automated fact-checking (AFC) can be in multiple modalities such as text, tables, images, audio, or video.
Our aim with this paper is to elicit the user requirements for a Wikidata recommendation system.
Knowledge Graphs are repositories of information that gather data from a multitude of domains and sources in the form of semantic triples, serving as a source of structured data for various crucial applications in the modern web landscape, from Wikipedia infoboxes to search engines.
Keeping pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry.
However, such labels are not guaranteed to match across languages from an information consistency standpoint, greatly compromising their usefulness for fields such as machine translation.
Data verbalisation is a task of great importance in the current field of natural language processing, as transforming our abundant structured and semi-structured data into human-readable formats yields substantial benefits.
Wikidata is one of the most important sources of structured data on the web, built by a worldwide community of volunteers.
The system uses a hybrid of content-based and collaborative filtering techniques to rank items for editors, relying on both item features and previous item-editor interactions.
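Such a hybrid ranking could be sketched as a weighted blend of the two signals. The function below is a minimal illustration under assumed inputs (a dense item-feature matrix, a binary editor-item interaction matrix, and a mixing weight `alpha`, none of which are specified by the original system):

```python
import numpy as np

def hybrid_scores(item_features, interactions, editor_id, alpha=0.5):
    """Rank items for one editor by blending a content-based score
    (similarity to the editor's previously edited items) with a
    collaborative score (a simple user-user neighbourhood vote).
    All shapes and the alpha weighting are illustrative assumptions."""
    # Content-based signal: cosine similarity between each item's
    # feature vector and the mean profile of items the editor edited.
    edited = interactions[editor_id] > 0
    profile = item_features[edited].mean(axis=0) if edited.any() \
        else item_features.mean(axis=0)
    norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(profile)
    content = item_features @ profile / np.where(norms == 0, 1.0, norms)

    # Collaborative signal: editors similar to this one (by shared
    # interactions) cast weighted votes over all items.
    sims = interactions @ interactions[editor_id]      # (n_editors,)
    collab = sims @ interactions                        # (n_items,)
    collab = collab / max(collab.max(), 1e-9)

    return alpha * content + (1 - alpha) * collab
```

With two editors and three items, the editor who only edited item 0 would see item 0 ranked first, since both signals agree on it.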
While Wikipedia exists in 287 languages, its content is unevenly distributed among them.
We explore the problem of generating natural language summaries for Semantic Web data.
For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range and diversity of emotions it captures.
Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolutional Neural Network (CNN) that is trained over Wikipedia sentences, and a formulation that couples the two trained embeddings in a multimodal space.
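The coupling described above can be sketched as two encoders projected into a shared space. The PyTorch module below is an illustrative reconstruction only; the layer types beyond "RNN" and "CNN", all dimensions, and the use of linear projections for the coupling are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class MultimodalCoupler(nn.Module):
    """Sketch: a GRU encodes concatenated comment token sequences,
    a 1-D CNN encodes Wikipedia sentences, and two linear layers
    project both embeddings into one joint (multimodal) space."""

    def __init__(self, vocab=1000, emb=32, hidden=64, joint=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(emb, hidden, kernel_size=3, padding=1),
            nn.AdaptiveMaxPool1d(1),   # max-pool over the sentence length
        )
        self.proj_rnn = nn.Linear(hidden, joint)
        self.proj_cnn = nn.Linear(hidden, joint)

    def forward(self, comments, sentences):
        # comments, sentences: (batch, seq_len) integer token ids
        _, h = self.rnn(self.embed(comments))            # h: (1, batch, hidden)
        c = self.cnn(self.embed(sentences).transpose(1, 2)).squeeze(-1)
        # Both outputs live in the same joint space and can be
        # compared or combined by a downstream coupling loss.
        return self.proj_rnn(h[-1]), self.proj_cnn(c)
```

In training, a contrastive or alignment loss between the two projected embeddings would realise the coupling; that choice is left open here.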